Test Report: KVM_Linux_crio 19313

761b7fc65973460b6ca8311b028efa5f69b15d0b:2024-07-22:35453

Failed tests (30/326)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 150.7
41 TestAddons/parallel/MetricsServer 334.9
54 TestAddons/StoppedEnableDisable 154.19
106 TestFunctional/parallel/PersistentVolumeClaim 187.83
173 TestMultiControlPlane/serial/StopSecondaryNode 141.64
175 TestMultiControlPlane/serial/RestartSecondaryNode 62.45
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 353.48
180 TestMultiControlPlane/serial/StopCluster 141.8
240 TestMultiNode/serial/RestartKeepsNodes 323.1
242 TestMultiNode/serial/StopMultiNode 141.41
249 TestPreload 275.42
257 TestKubernetesUpgrade 474
300 TestStartStop/group/old-k8s-version/serial/FirstStart 265.49
305 TestStartStop/group/no-preload/serial/Stop 139.29
310 TestStartStop/group/embed-certs/serial/Stop 139.21
313 TestStartStop/group/old-k8s-version/serial/DeployApp 0.5
314 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 116.28
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.03
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
324 TestStartStop/group/old-k8s-version/serial/SecondStart 710.29
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.98
328 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.11
329 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.16
330 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.27
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 498.14
332 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 533.6
333 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 298.79
334 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 163
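
Each failure above can be re-run in isolation from a minikube checkout by filtering the Go integration suite on the test name. The sketch below is a hypothetical local invocation, assuming the integration tests live under test/integration as in this job; any additional harness flags (driver selection, binary path) depend on the local setup:

    # re-run a single failing test by name; subtests are addressed with '/' in the -run pattern
    go test ./test/integration -v -timeout 60m -run 'TestAddons/parallel/Ingress'
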
TestAddons/parallel/Ingress (150.7s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-362127 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-362127 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-362127 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b92b935b-6089-4609-bf2a-f636364a6400] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b92b935b-6089-4609-bf2a-f636364a6400] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003220586s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-362127 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-362127 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.557104926s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
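
For reference, curl reports exit status 28 when a transfer hits its time limit, so the ssh error above means the request to the ingress on 127.0.0.1 never got a response before the test gave up. A hypothetical manual re-check against this profile, assuming the cluster and the ingress addon were still running at the time, could look like:

    # repeat the probe from inside the node with an explicit timeout (curl exit 28 = operation timed out)
    out/minikube-linux-amd64 -p addons-362127 ssh "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
    # confirm the ingress-nginx controller and the backing nginx pod are Ready, and that an Ingress object exists
    kubectl --context addons-362127 -n ingress-nginx get pods -o wide
    kubectl --context addons-362127 get ingress
    kubectl --context addons-362127 get pods -l run=nginx
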
addons_test.go:288: (dbg) Run:  kubectl --context addons-362127 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-362127 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.147
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-362127 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-362127 addons disable ingress-dns --alsologtostderr -v=1: (1.552621816s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-362127 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-362127 addons disable ingress --alsologtostderr -v=1: (7.648275597s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-362127 -n addons-362127
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-362127 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-362127 logs -n 25: (1.314118448s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| delete  | -p download-only-196061                                                                     | download-only-196061 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| delete  | -p download-only-451721                                                                     | download-only-451721 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| delete  | -p download-only-832339                                                                     | download-only-832339 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| delete  | -p download-only-196061                                                                     | download-only-196061 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-224708 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC |                     |
	|         | binary-mirror-224708                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42063                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-224708                                                                     | binary-mirror-224708 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| addons  | enable dashboard -p                                                                         | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC |                     |
	|         | addons-362127                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC |                     |
	|         | addons-362127                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-362127 --wait=true                                                                | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:31 UTC |
	|         | -p addons-362127                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:31 UTC |
	|         | -p addons-362127                                                                            |                      |         |         |                     |                     |
	| addons  | addons-362127 addons disable                                                                | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:31 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-362127 ip                                                                            | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:31 UTC |
	| addons  | addons-362127 addons disable                                                                | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:31 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:31 UTC |
	|         | addons-362127                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-362127 ssh cat                                                                       | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:31 UTC |
	|         | /opt/local-path-provisioner/pvc-bc269bbf-3c8b-4d86-a8aa-8acec54e004a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-362127 addons disable                                                                | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:32 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:32 UTC |
	|         | addons-362127                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-362127 ssh curl -s                                                                   | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:32 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-362127 addons                                                                        | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:32 UTC | 22 Jul 24 10:32 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-362127 addons                                                                        | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:32 UTC | 22 Jul 24 10:32 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-362127 ip                                                                            | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:34 UTC | 22 Jul 24 10:34 UTC |
	| addons  | addons-362127 addons disable                                                                | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:34 UTC | 22 Jul 24 10:34 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-362127 addons disable                                                                | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:34 UTC | 22 Jul 24 10:34 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 10:29:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 10:29:19.589001   14017 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:29:19.589248   14017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:29:19.589258   14017 out.go:304] Setting ErrFile to fd 2...
	I0722 10:29:19.589262   14017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:29:19.589451   14017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:29:19.590019   14017 out.go:298] Setting JSON to false
	I0722 10:29:19.590810   14017 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":712,"bootTime":1721643448,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 10:29:19.590875   14017 start.go:139] virtualization: kvm guest
	I0722 10:29:19.592705   14017 out.go:177] * [addons-362127] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 10:29:19.593814   14017 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 10:29:19.593808   14017 notify.go:220] Checking for updates...
	I0722 10:29:19.596165   14017 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 10:29:19.597386   14017 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:29:19.598534   14017 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:29:19.599512   14017 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 10:29:19.600526   14017 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 10:29:19.601749   14017 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 10:29:19.632490   14017 out.go:177] * Using the kvm2 driver based on user configuration
	I0722 10:29:19.633636   14017 start.go:297] selected driver: kvm2
	I0722 10:29:19.633659   14017 start.go:901] validating driver "kvm2" against <nil>
	I0722 10:29:19.633672   14017 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 10:29:19.634320   14017 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:29:19.634391   14017 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 10:29:19.648637   14017 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 10:29:19.648680   14017 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 10:29:19.648931   14017 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 10:29:19.648997   14017 cni.go:84] Creating CNI manager for ""
	I0722 10:29:19.649013   14017 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 10:29:19.649026   14017 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 10:29:19.649087   14017 start.go:340] cluster config:
	{Name:addons-362127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-362127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:29:19.649216   14017 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:29:19.650908   14017 out.go:177] * Starting "addons-362127" primary control-plane node in "addons-362127" cluster
	I0722 10:29:19.652097   14017 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:29:19.652136   14017 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 10:29:19.652146   14017 cache.go:56] Caching tarball of preloaded images
	I0722 10:29:19.652238   14017 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 10:29:19.652251   14017 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 10:29:19.652579   14017 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/config.json ...
	I0722 10:29:19.652607   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/config.json: {Name:mkc892ee9b8d8fe87cfad510947acbb2a73e77b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:19.652757   14017 start.go:360] acquireMachinesLock for addons-362127: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 10:29:19.652835   14017 start.go:364] duration metric: took 62.749µs to acquireMachinesLock for "addons-362127"
	I0722 10:29:19.652859   14017 start.go:93] Provisioning new machine with config: &{Name:addons-362127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-362127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:29:19.652940   14017 start.go:125] createHost starting for "" (driver="kvm2")
	I0722 10:29:19.654399   14017 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0722 10:29:19.654528   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:29:19.654569   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:29:19.668054   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45093
	I0722 10:29:19.668473   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:29:19.668987   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:29:19.669009   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:29:19.669274   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:29:19.669437   14017 main.go:141] libmachine: (addons-362127) Calling .GetMachineName
	I0722 10:29:19.669575   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:29:19.669720   14017 start.go:159] libmachine.API.Create for "addons-362127" (driver="kvm2")
	I0722 10:29:19.669743   14017 client.go:168] LocalClient.Create starting
	I0722 10:29:19.669771   14017 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem
	I0722 10:29:20.171755   14017 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem
	I0722 10:29:20.254166   14017 main.go:141] libmachine: Running pre-create checks...
	I0722 10:29:20.254185   14017 main.go:141] libmachine: (addons-362127) Calling .PreCreateCheck
	I0722 10:29:20.254643   14017 main.go:141] libmachine: (addons-362127) Calling .GetConfigRaw
	I0722 10:29:20.255041   14017 main.go:141] libmachine: Creating machine...
	I0722 10:29:20.255054   14017 main.go:141] libmachine: (addons-362127) Calling .Create
	I0722 10:29:20.255210   14017 main.go:141] libmachine: (addons-362127) Creating KVM machine...
	I0722 10:29:20.256548   14017 main.go:141] libmachine: (addons-362127) DBG | found existing default KVM network
	I0722 10:29:20.257226   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:20.257104   14039 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0722 10:29:20.257269   14017 main.go:141] libmachine: (addons-362127) DBG | created network xml: 
	I0722 10:29:20.257289   14017 main.go:141] libmachine: (addons-362127) DBG | <network>
	I0722 10:29:20.257300   14017 main.go:141] libmachine: (addons-362127) DBG |   <name>mk-addons-362127</name>
	I0722 10:29:20.257311   14017 main.go:141] libmachine: (addons-362127) DBG |   <dns enable='no'/>
	I0722 10:29:20.257321   14017 main.go:141] libmachine: (addons-362127) DBG |   
	I0722 10:29:20.257331   14017 main.go:141] libmachine: (addons-362127) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0722 10:29:20.257342   14017 main.go:141] libmachine: (addons-362127) DBG |     <dhcp>
	I0722 10:29:20.257352   14017 main.go:141] libmachine: (addons-362127) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0722 10:29:20.257364   14017 main.go:141] libmachine: (addons-362127) DBG |     </dhcp>
	I0722 10:29:20.257376   14017 main.go:141] libmachine: (addons-362127) DBG |   </ip>
	I0722 10:29:20.257387   14017 main.go:141] libmachine: (addons-362127) DBG |   
	I0722 10:29:20.257395   14017 main.go:141] libmachine: (addons-362127) DBG | </network>
	I0722 10:29:20.257408   14017 main.go:141] libmachine: (addons-362127) DBG | 
	I0722 10:29:20.262685   14017 main.go:141] libmachine: (addons-362127) DBG | trying to create private KVM network mk-addons-362127 192.168.39.0/24...
	I0722 10:29:20.326271   14017 main.go:141] libmachine: (addons-362127) DBG | private KVM network mk-addons-362127 192.168.39.0/24 created
	I0722 10:29:20.326300   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:20.326251   14039 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:29:20.326326   14017 main.go:141] libmachine: (addons-362127) Setting up store path in /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127 ...
	I0722 10:29:20.326343   14017 main.go:141] libmachine: (addons-362127) Building disk image from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 10:29:20.326429   14017 main.go:141] libmachine: (addons-362127) Downloading /home/jenkins/minikube-integration/19313-5960/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 10:29:20.561832   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:20.561691   14039 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa...
	I0722 10:29:20.676096   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:20.676002   14039 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/addons-362127.rawdisk...
	I0722 10:29:20.676124   14017 main.go:141] libmachine: (addons-362127) DBG | Writing magic tar header
	I0722 10:29:20.676145   14017 main.go:141] libmachine: (addons-362127) DBG | Writing SSH key tar header
	I0722 10:29:20.676204   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:20.676137   14039 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127 ...
	I0722 10:29:20.676276   14017 main.go:141] libmachine: (addons-362127) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127
	I0722 10:29:20.676297   14017 main.go:141] libmachine: (addons-362127) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines
	I0722 10:29:20.676310   14017 main.go:141] libmachine: (addons-362127) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127 (perms=drwx------)
	I0722 10:29:20.676327   14017 main.go:141] libmachine: (addons-362127) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines (perms=drwxr-xr-x)
	I0722 10:29:20.676333   14017 main.go:141] libmachine: (addons-362127) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube (perms=drwxr-xr-x)
	I0722 10:29:20.676345   14017 main.go:141] libmachine: (addons-362127) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960 (perms=drwxrwxr-x)
	I0722 10:29:20.676351   14017 main.go:141] libmachine: (addons-362127) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:29:20.676364   14017 main.go:141] libmachine: (addons-362127) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960
	I0722 10:29:20.676374   14017 main.go:141] libmachine: (addons-362127) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0722 10:29:20.676406   14017 main.go:141] libmachine: (addons-362127) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0722 10:29:20.676427   14017 main.go:141] libmachine: (addons-362127) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0722 10:29:20.676436   14017 main.go:141] libmachine: (addons-362127) DBG | Checking permissions on dir: /home/jenkins
	I0722 10:29:20.676454   14017 main.go:141] libmachine: (addons-362127) DBG | Checking permissions on dir: /home
	I0722 10:29:20.676467   14017 main.go:141] libmachine: (addons-362127) Creating domain...
	I0722 10:29:20.676476   14017 main.go:141] libmachine: (addons-362127) DBG | Skipping /home - not owner
	I0722 10:29:20.677387   14017 main.go:141] libmachine: (addons-362127) define libvirt domain using xml: 
	I0722 10:29:20.677407   14017 main.go:141] libmachine: (addons-362127) <domain type='kvm'>
	I0722 10:29:20.677417   14017 main.go:141] libmachine: (addons-362127)   <name>addons-362127</name>
	I0722 10:29:20.677424   14017 main.go:141] libmachine: (addons-362127)   <memory unit='MiB'>4000</memory>
	I0722 10:29:20.677432   14017 main.go:141] libmachine: (addons-362127)   <vcpu>2</vcpu>
	I0722 10:29:20.677439   14017 main.go:141] libmachine: (addons-362127)   <features>
	I0722 10:29:20.677448   14017 main.go:141] libmachine: (addons-362127)     <acpi/>
	I0722 10:29:20.677458   14017 main.go:141] libmachine: (addons-362127)     <apic/>
	I0722 10:29:20.677467   14017 main.go:141] libmachine: (addons-362127)     <pae/>
	I0722 10:29:20.677476   14017 main.go:141] libmachine: (addons-362127)     
	I0722 10:29:20.677484   14017 main.go:141] libmachine: (addons-362127)   </features>
	I0722 10:29:20.677497   14017 main.go:141] libmachine: (addons-362127)   <cpu mode='host-passthrough'>
	I0722 10:29:20.677508   14017 main.go:141] libmachine: (addons-362127)   
	I0722 10:29:20.677527   14017 main.go:141] libmachine: (addons-362127)   </cpu>
	I0722 10:29:20.677538   14017 main.go:141] libmachine: (addons-362127)   <os>
	I0722 10:29:20.677544   14017 main.go:141] libmachine: (addons-362127)     <type>hvm</type>
	I0722 10:29:20.677553   14017 main.go:141] libmachine: (addons-362127)     <boot dev='cdrom'/>
	I0722 10:29:20.677564   14017 main.go:141] libmachine: (addons-362127)     <boot dev='hd'/>
	I0722 10:29:20.677577   14017 main.go:141] libmachine: (addons-362127)     <bootmenu enable='no'/>
	I0722 10:29:20.677592   14017 main.go:141] libmachine: (addons-362127)   </os>
	I0722 10:29:20.677627   14017 main.go:141] libmachine: (addons-362127)   <devices>
	I0722 10:29:20.677659   14017 main.go:141] libmachine: (addons-362127)     <disk type='file' device='cdrom'>
	I0722 10:29:20.677682   14017 main.go:141] libmachine: (addons-362127)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/boot2docker.iso'/>
	I0722 10:29:20.677696   14017 main.go:141] libmachine: (addons-362127)       <target dev='hdc' bus='scsi'/>
	I0722 10:29:20.677721   14017 main.go:141] libmachine: (addons-362127)       <readonly/>
	I0722 10:29:20.677740   14017 main.go:141] libmachine: (addons-362127)     </disk>
	I0722 10:29:20.677757   14017 main.go:141] libmachine: (addons-362127)     <disk type='file' device='disk'>
	I0722 10:29:20.677771   14017 main.go:141] libmachine: (addons-362127)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0722 10:29:20.677789   14017 main.go:141] libmachine: (addons-362127)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/addons-362127.rawdisk'/>
	I0722 10:29:20.677804   14017 main.go:141] libmachine: (addons-362127)       <target dev='hda' bus='virtio'/>
	I0722 10:29:20.677818   14017 main.go:141] libmachine: (addons-362127)     </disk>
	I0722 10:29:20.677840   14017 main.go:141] libmachine: (addons-362127)     <interface type='network'>
	I0722 10:29:20.677861   14017 main.go:141] libmachine: (addons-362127)       <source network='mk-addons-362127'/>
	I0722 10:29:20.677875   14017 main.go:141] libmachine: (addons-362127)       <model type='virtio'/>
	I0722 10:29:20.677886   14017 main.go:141] libmachine: (addons-362127)     </interface>
	I0722 10:29:20.677902   14017 main.go:141] libmachine: (addons-362127)     <interface type='network'>
	I0722 10:29:20.677916   14017 main.go:141] libmachine: (addons-362127)       <source network='default'/>
	I0722 10:29:20.677944   14017 main.go:141] libmachine: (addons-362127)       <model type='virtio'/>
	I0722 10:29:20.677966   14017 main.go:141] libmachine: (addons-362127)     </interface>
	I0722 10:29:20.677979   14017 main.go:141] libmachine: (addons-362127)     <serial type='pty'>
	I0722 10:29:20.677992   14017 main.go:141] libmachine: (addons-362127)       <target port='0'/>
	I0722 10:29:20.678004   14017 main.go:141] libmachine: (addons-362127)     </serial>
	I0722 10:29:20.678014   14017 main.go:141] libmachine: (addons-362127)     <console type='pty'>
	I0722 10:29:20.678045   14017 main.go:141] libmachine: (addons-362127)       <target type='serial' port='0'/>
	I0722 10:29:20.678060   14017 main.go:141] libmachine: (addons-362127)     </console>
	I0722 10:29:20.678072   14017 main.go:141] libmachine: (addons-362127)     <rng model='virtio'>
	I0722 10:29:20.678083   14017 main.go:141] libmachine: (addons-362127)       <backend model='random'>/dev/random</backend>
	I0722 10:29:20.678094   14017 main.go:141] libmachine: (addons-362127)     </rng>
	I0722 10:29:20.678101   14017 main.go:141] libmachine: (addons-362127)     
	I0722 10:29:20.678111   14017 main.go:141] libmachine: (addons-362127)     
	I0722 10:29:20.678118   14017 main.go:141] libmachine: (addons-362127)   </devices>
	I0722 10:29:20.678139   14017 main.go:141] libmachine: (addons-362127) </domain>
	I0722 10:29:20.678155   14017 main.go:141] libmachine: (addons-362127) 
	I0722 10:29:20.683444   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:6d:18:a7 in network default
	I0722 10:29:20.683944   14017 main.go:141] libmachine: (addons-362127) Ensuring networks are active...
	I0722 10:29:20.683971   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:20.684568   14017 main.go:141] libmachine: (addons-362127) Ensuring network default is active
	I0722 10:29:20.684882   14017 main.go:141] libmachine: (addons-362127) Ensuring network mk-addons-362127 is active
	I0722 10:29:20.685373   14017 main.go:141] libmachine: (addons-362127) Getting domain xml...
	I0722 10:29:20.685992   14017 main.go:141] libmachine: (addons-362127) Creating domain...
	I0722 10:29:22.048532   14017 main.go:141] libmachine: (addons-362127) Waiting to get IP...
	I0722 10:29:22.049438   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:22.049871   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:22.049920   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:22.049865   14039 retry.go:31] will retry after 296.885308ms: waiting for machine to come up
	I0722 10:29:22.348410   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:22.348764   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:22.348802   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:22.348738   14039 retry.go:31] will retry after 341.960078ms: waiting for machine to come up
	I0722 10:29:22.692189   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:22.692703   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:22.692729   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:22.692652   14039 retry.go:31] will retry after 480.197578ms: waiting for machine to come up
	I0722 10:29:23.174095   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:23.174562   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:23.174589   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:23.174507   14039 retry.go:31] will retry after 471.102584ms: waiting for machine to come up
	I0722 10:29:23.646990   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:23.647460   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:23.647492   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:23.647417   14039 retry.go:31] will retry after 673.342516ms: waiting for machine to come up
	I0722 10:29:24.322298   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:24.322654   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:24.322673   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:24.322629   14039 retry.go:31] will retry after 625.787153ms: waiting for machine to come up
	I0722 10:29:24.949957   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:24.950287   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:24.950312   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:24.950238   14039 retry.go:31] will retry after 827.528686ms: waiting for machine to come up
	I0722 10:29:25.778949   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:25.779309   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:25.779329   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:25.779274   14039 retry.go:31] will retry after 1.408983061s: waiting for machine to come up
	I0722 10:29:27.189800   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:27.190195   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:27.190223   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:27.190147   14039 retry.go:31] will retry after 1.767432679s: waiting for machine to come up
	I0722 10:29:28.960519   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:28.960927   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:28.960956   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:28.960876   14039 retry.go:31] will retry after 2.263225443s: waiting for machine to come up
	I0722 10:29:31.225552   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:31.225965   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:31.225990   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:31.225929   14039 retry.go:31] will retry after 2.324899366s: waiting for machine to come up
	I0722 10:29:33.553341   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:33.553655   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:33.553679   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:33.553622   14039 retry.go:31] will retry after 3.136063412s: waiting for machine to come up
	I0722 10:29:36.692416   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:36.692887   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:36.692914   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:36.692823   14039 retry.go:31] will retry after 4.388122313s: waiting for machine to come up
	I0722 10:29:41.082901   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.083364   14017 main.go:141] libmachine: (addons-362127) Found IP for machine: 192.168.39.147
	I0722 10:29:41.083385   14017 main.go:141] libmachine: (addons-362127) Reserving static IP address...
	I0722 10:29:41.083398   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has current primary IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.083760   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find host DHCP lease matching {name: "addons-362127", mac: "52:54:00:5d:13:55", ip: "192.168.39.147"} in network mk-addons-362127
	I0722 10:29:41.150169   14017 main.go:141] libmachine: (addons-362127) DBG | Getting to WaitForSSH function...
	I0722 10:29:41.150199   14017 main.go:141] libmachine: (addons-362127) Reserved static IP address: 192.168.39.147
	I0722 10:29:41.150213   14017 main.go:141] libmachine: (addons-362127) Waiting for SSH to be available...
	I0722 10:29:41.152466   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.152865   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:41.152893   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.153026   14017 main.go:141] libmachine: (addons-362127) DBG | Using SSH client type: external
	I0722 10:29:41.153051   14017 main.go:141] libmachine: (addons-362127) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa (-rw-------)
	I0722 10:29:41.153083   14017 main.go:141] libmachine: (addons-362127) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 10:29:41.153094   14017 main.go:141] libmachine: (addons-362127) DBG | About to run SSH command:
	I0722 10:29:41.153138   14017 main.go:141] libmachine: (addons-362127) DBG | exit 0
	I0722 10:29:41.279873   14017 main.go:141] libmachine: (addons-362127) DBG | SSH cmd err, output: <nil>: 
	I0722 10:29:41.280097   14017 main.go:141] libmachine: (addons-362127) KVM machine creation complete!
	I0722 10:29:41.280367   14017 main.go:141] libmachine: (addons-362127) Calling .GetConfigRaw
	I0722 10:29:41.280893   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:29:41.281078   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:29:41.281214   14017 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0722 10:29:41.281230   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:29:41.282290   14017 main.go:141] libmachine: Detecting operating system of created instance...
	I0722 10:29:41.282300   14017 main.go:141] libmachine: Waiting for SSH to be available...
	I0722 10:29:41.282306   14017 main.go:141] libmachine: Getting to WaitForSSH function...
	I0722 10:29:41.282311   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:41.284712   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.285071   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:41.285094   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.285229   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:41.285384   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.285516   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.285642   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:41.285834   14017 main.go:141] libmachine: Using SSH client type: native
	I0722 10:29:41.286113   14017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0722 10:29:41.286127   14017 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0722 10:29:41.379473   14017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 10:29:41.379498   14017 main.go:141] libmachine: Detecting the provisioner...
	I0722 10:29:41.379509   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:41.382453   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.382816   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:41.382841   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.383021   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:41.383222   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.383386   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.383540   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:41.383697   14017 main.go:141] libmachine: Using SSH client type: native
	I0722 10:29:41.383869   14017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0722 10:29:41.383880   14017 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0722 10:29:41.480602   14017 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0722 10:29:41.480672   14017 main.go:141] libmachine: found compatible host: buildroot
	I0722 10:29:41.480685   14017 main.go:141] libmachine: Provisioning with buildroot...
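The provisioner is detected by running cat /etc/os-release over SSH and matching the reported distribution; the minikube ISO reports NAME=Buildroot above, which maps to the "buildroot" provisioner. A minimal sketch of the same check by hand (host IP, user, SSH options and key path are the ones shown in this run; running it yourself is otherwise an assumption):

    # Query the guest's OS identification the way the provisioner does.
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa \
        docker@192.168.39.147 'cat /etc/os-release'
    # A Buildroot guest prints NAME=Buildroot / VERSION=2023.02.9, as logged above.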
	I0722 10:29:41.480699   14017 main.go:141] libmachine: (addons-362127) Calling .GetMachineName
	I0722 10:29:41.480945   14017 buildroot.go:166] provisioning hostname "addons-362127"
	I0722 10:29:41.480973   14017 main.go:141] libmachine: (addons-362127) Calling .GetMachineName
	I0722 10:29:41.481174   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:41.483646   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.483928   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:41.483949   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.484095   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:41.484281   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.484455   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.484595   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:41.484765   14017 main.go:141] libmachine: Using SSH client type: native
	I0722 10:29:41.484923   14017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0722 10:29:41.484936   14017 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-362127 && echo "addons-362127" | sudo tee /etc/hostname
	I0722 10:29:41.594611   14017 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-362127
	
	I0722 10:29:41.594633   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:41.596974   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.597258   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:41.597307   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.597427   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:41.597622   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.597756   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.597961   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:41.598082   14017 main.go:141] libmachine: Using SSH client type: native
	I0722 10:29:41.598254   14017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0722 10:29:41.598276   14017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-362127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-362127/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-362127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 10:29:41.700715   14017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
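Hostname provisioning is two SSH commands: set the hostname, then make sure /etc/hosts resolves it, patching any existing 127.0.1.1 entry. A consolidated sketch of what the two logged commands do, run on the guest:

    #!/bin/sh
    # Set the transient and persistent hostname.
    sudo hostname addons-362127 && echo "addons-362127" | sudo tee /etc/hostname
    # Ensure the name resolves locally; reuse an existing 127.0.1.1 line if present.
    if ! grep -q 'addons-362127' /etc/hosts; then
        if grep -q '^127.0.1.1' /etc/hosts; then
            sudo sed -i 's/^127.0.1.1.*/127.0.1.1 addons-362127/' /etc/hosts
        else
            echo '127.0.1.1 addons-362127' | sudo tee -a /etc/hosts
        fi
    fi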
	I0722 10:29:41.700741   14017 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 10:29:41.700791   14017 buildroot.go:174] setting up certificates
	I0722 10:29:41.700804   14017 provision.go:84] configureAuth start
	I0722 10:29:41.700822   14017 main.go:141] libmachine: (addons-362127) Calling .GetMachineName
	I0722 10:29:41.701089   14017 main.go:141] libmachine: (addons-362127) Calling .GetIP
	I0722 10:29:41.703475   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.703794   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:41.703821   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.703967   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:41.706006   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.706317   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:41.706337   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.706506   14017 provision.go:143] copyHostCerts
	I0722 10:29:41.706581   14017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 10:29:41.706698   14017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 10:29:41.706778   14017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 10:29:41.706847   14017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.addons-362127 san=[127.0.0.1 192.168.39.147 addons-362127 localhost minikube]
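configureAuth then signs a server certificate with the minikube CA for the SANs listed above (127.0.0.1, 192.168.39.147, addons-362127, localhost, minikube). minikube does this in Go; the following openssl sketch is only an assumed equivalent that makes the shape of that certificate explicit (file names mirror the paths in the log):

    # Assumed equivalent of the logged Go cert generation; not the code minikube runs.
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -out server.csr -subj "/O=jenkins.addons-362127"
    printf "subjectAltName=DNS:addons-362127,DNS:localhost,DNS:minikube,IP:127.0.0.1,IP:192.168.39.147\n" > san.cnf
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -out server.pem -days 365 -extfile san.cnf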
	I0722 10:29:41.894425   14017 provision.go:177] copyRemoteCerts
	I0722 10:29:41.894477   14017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 10:29:41.894500   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:41.897006   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.897330   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:41.897354   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.897492   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:41.897655   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.897786   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:41.897909   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:29:41.973624   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 10:29:41.996692   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 10:29:42.018990   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 10:29:42.041253   14017 provision.go:87] duration metric: took 340.435418ms to configureAuth
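copyRemoteCerts pushes the CA and the freshly generated server key pair into /etc/docker on the guest over the same SSH session. Reproduced by hand it would look roughly like this; plain scp stands in for minikube's internal ssh_runner copy, so the exact transfer mechanism is an assumption:

    KEY=/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa
    M=/home/jenkins/minikube-integration/19313-5960/.minikube
    ssh -i "$KEY" docker@192.168.39.147 'sudo mkdir -p /etc/docker'
    for f in certs/ca.pem machines/server.pem machines/server-key.pem; do
        scp -i "$KEY" "$M/$f" docker@192.168.39.147:/tmp/
    done
    ssh -i "$KEY" docker@192.168.39.147 \
        'sudo mv /tmp/ca.pem /tmp/server.pem /tmp/server-key.pem /etc/docker/'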
	I0722 10:29:42.041273   14017 buildroot.go:189] setting minikube options for container-runtime
	I0722 10:29:42.041436   14017 config.go:182] Loaded profile config "addons-362127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:29:42.041512   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:42.043838   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.044105   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:42.044136   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.044276   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:42.044459   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:42.044602   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:42.044744   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:42.044906   14017 main.go:141] libmachine: Using SSH client type: native
	I0722 10:29:42.045048   14017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0722 10:29:42.045060   14017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 10:29:42.291411   14017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
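The %!s(MISSING) in the logged command is a Go format-verb artifact in the log itself; the payload that actually lands on the guest is the CRIO_MINIKUBE_OPTIONS line echoed back in the output above. The step amounts to:

    # Write the cri-o sysconfig drop-in for minikube and restart the runtime.
    sudo mkdir -p /etc/sysconfig
    printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
        | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio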
	I0722 10:29:42.291433   14017 main.go:141] libmachine: Checking connection to Docker...
	I0722 10:29:42.291441   14017 main.go:141] libmachine: (addons-362127) Calling .GetURL
	I0722 10:29:42.292571   14017 main.go:141] libmachine: (addons-362127) DBG | Using libvirt version 6000000
	I0722 10:29:42.294571   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.294826   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:42.294850   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.295023   14017 main.go:141] libmachine: Docker is up and running!
	I0722 10:29:42.295047   14017 main.go:141] libmachine: Reticulating splines...
	I0722 10:29:42.295053   14017 client.go:171] duration metric: took 22.625304136s to LocalClient.Create
	I0722 10:29:42.295073   14017 start.go:167] duration metric: took 22.625352131s to libmachine.API.Create "addons-362127"
	I0722 10:29:42.295086   14017 start.go:293] postStartSetup for "addons-362127" (driver="kvm2")
	I0722 10:29:42.295099   14017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 10:29:42.295115   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:29:42.295351   14017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 10:29:42.295387   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:42.297207   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.297511   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:42.297540   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.297634   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:42.297806   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:42.297966   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:42.298099   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:29:42.374402   14017 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 10:29:42.378378   14017 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 10:29:42.378404   14017 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 10:29:42.378462   14017 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 10:29:42.378487   14017 start.go:296] duration metric: took 83.3928ms for postStartSetup
	I0722 10:29:42.378511   14017 main.go:141] libmachine: (addons-362127) Calling .GetConfigRaw
	I0722 10:29:42.378959   14017 main.go:141] libmachine: (addons-362127) Calling .GetIP
	I0722 10:29:42.381379   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.381820   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:42.381845   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.382096   14017 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/config.json ...
	I0722 10:29:42.382309   14017 start.go:128] duration metric: took 22.729356958s to createHost
	I0722 10:29:42.382333   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:42.384569   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.384862   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:42.384889   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.385044   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:42.385202   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:42.385363   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:42.385500   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:42.385626   14017 main.go:141] libmachine: Using SSH client type: native
	I0722 10:29:42.385776   14017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0722 10:29:42.385786   14017 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 10:29:42.480657   14017 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721644182.455604273
	
	I0722 10:29:42.480680   14017 fix.go:216] guest clock: 1721644182.455604273
	I0722 10:29:42.480690   14017 fix.go:229] Guest: 2024-07-22 10:29:42.455604273 +0000 UTC Remote: 2024-07-22 10:29:42.382323527 +0000 UTC m=+22.826470222 (delta=73.280746ms)
	I0722 10:29:42.480731   14017 fix.go:200] guest clock delta is within tolerance: 73.280746ms
	I0722 10:29:42.480736   14017 start.go:83] releasing machines lock for "addons-362127", held for 22.827889547s
	I0722 10:29:42.480757   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:29:42.481015   14017 main.go:141] libmachine: (addons-362127) Calling .GetIP
	I0722 10:29:42.483354   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.483723   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:42.483748   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.483904   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:29:42.484400   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:29:42.484561   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:29:42.484665   14017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 10:29:42.484716   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:42.484749   14017 ssh_runner.go:195] Run: cat /version.json
	I0722 10:29:42.484771   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:42.487283   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.487438   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.487557   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:42.487581   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.487738   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:42.487879   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:42.487896   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:42.487908   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.488036   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:42.488089   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:42.488171   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:42.488214   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:29:42.488285   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:42.488420   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:29:42.585533   14017 ssh_runner.go:195] Run: systemctl --version
	I0722 10:29:42.591184   14017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 10:29:42.745296   14017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 10:29:42.750982   14017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 10:29:42.751031   14017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 10:29:42.767021   14017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
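The find invocation above is logged with its shell quoting stripped (and another %!p(MISSING) verb artifact); a runnable reconstruction of what it does, renaming any bridge/podman CNI configs so only minikube's own CNI config stays active:

    # Disable competing CNI configs by renaming them with a .mk_disabled suffix.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
        -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;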
	I0722 10:29:42.767041   14017 start.go:495] detecting cgroup driver to use...
	I0722 10:29:42.767097   14017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 10:29:42.783278   14017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 10:29:42.797108   14017 docker.go:217] disabling cri-docker service (if available) ...
	I0722 10:29:42.797156   14017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 10:29:42.810143   14017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 10:29:42.823176   14017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 10:29:42.936166   14017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 10:29:43.069095   14017 docker.go:233] disabling docker service ...
	I0722 10:29:43.069160   14017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 10:29:43.083237   14017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 10:29:43.095562   14017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 10:29:43.228490   14017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 10:29:43.343384   14017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 10:29:43.357392   14017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 10:29:43.374871   14017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 10:29:43.374932   14017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:29:43.385318   14017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 10:29:43.385375   14017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:29:43.395737   14017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:29:43.405878   14017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:29:43.415804   14017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 10:29:43.425968   14017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:29:43.435811   14017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:29:43.452530   14017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:29:43.462700   14017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 10:29:43.472039   14017 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 10:29:43.472084   14017 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 10:29:43.484344   14017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
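Taken together, the commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so cri-o uses the expected pause image, the cgroupfs driver, a pod-scoped conmon cgroup and an unprivileged-port sysctl, and prepare the kernel for bridged pod traffic. A condensed sketch of the same edits, mirroring the logged sed expressions:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Allow pods to bind low ports without extra privileges.
    sudo grep -q '^ *default_sysctls' "$CONF" || \
        sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    # Kernel prerequisites for bridged CNI traffic and forwarding.
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'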
	I0722 10:29:43.493495   14017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:29:43.608656   14017 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 10:29:43.745676   14017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 10:29:43.745759   14017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 10:29:43.750557   14017 start.go:563] Will wait 60s for crictl version
	I0722 10:29:43.750610   14017 ssh_runner.go:195] Run: which crictl
	I0722 10:29:43.754165   14017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 10:29:43.789788   14017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 10:29:43.789900   14017 ssh_runner.go:195] Run: crio --version
	I0722 10:29:43.816975   14017 ssh_runner.go:195] Run: crio --version
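After the restart the harness waits for /var/run/crio/crio.sock and then confirms the runtime through both crictl and crio. Checked by hand, under the same assumptions:

    sudo systemctl daemon-reload && sudo systemctl restart crio
    # Wait for the CRI socket, then confirm runtime name and API version.
    until [ -S /var/run/crio/crio.sock ]; do sleep 1; done
    sudo crictl version
    crio --version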
	I0722 10:29:43.848783   14017 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 10:29:43.849976   14017 main.go:141] libmachine: (addons-362127) Calling .GetIP
	I0722 10:29:43.852269   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:43.852632   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:43.852660   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:43.852835   14017 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 10:29:43.856776   14017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 10:29:43.868436   14017 kubeadm.go:883] updating cluster {Name:addons-362127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-362127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 10:29:43.868534   14017 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:29:43.868576   14017 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 10:29:43.901435   14017 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 10:29:43.901482   14017 ssh_runner.go:195] Run: which lz4
	I0722 10:29:43.905129   14017 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 10:29:43.908916   14017 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 10:29:43.908936   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 10:29:45.190696   14017 crio.go:462] duration metric: took 1.285590031s to copy over tarball
	I0722 10:29:45.190794   14017 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 10:29:47.408165   14017 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.217340659s)
	I0722 10:29:47.408191   14017 crio.go:469] duration metric: took 2.217463481s to extract the tarball
	I0722 10:29:47.408199   14017 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 10:29:47.452401   14017 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 10:29:47.493848   14017 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 10:29:47.493889   14017 cache_images.go:84] Images are preloaded, skipping loading
	I0722 10:29:47.493899   14017 kubeadm.go:934] updating node { 192.168.39.147 8443 v1.30.3 crio true true} ...
	I0722 10:29:47.494023   14017 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-362127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-362127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
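The rendered kubelet snippet above is installed via the systemd drop-in scp'd a few lines below (10-kubeadm.conf); minikube actually splits the content between /lib/systemd/system/kubelet.service and the drop-in, so writing it all into one drop-in here is an illustrative assumption, with the ExecStart arguments exactly as logged:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-362127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147

    [Install]
    EOF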
	I0722 10:29:47.494108   14017 ssh_runner.go:195] Run: crio config
	I0722 10:29:47.538075   14017 cni.go:84] Creating CNI manager for ""
	I0722 10:29:47.538097   14017 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 10:29:47.538115   14017 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 10:29:47.538152   14017 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-362127 NodeName:addons-362127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 10:29:47.538319   14017 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-362127"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 10:29:47.538390   14017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 10:29:47.548608   14017 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 10:29:47.548661   14017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 10:29:47.558020   14017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0722 10:29:47.573925   14017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 10:29:47.589354   14017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0722 10:29:47.604625   14017 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I0722 10:29:47.608574   14017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 10:29:47.620223   14017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:29:47.723199   14017 ssh_runner.go:195] Run: sudo systemctl start kubelet
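With the unit files in place, the guest gets an /etc/hosts alias for control-plane.minikube.internal and kubelet is started. Done by hand, following the logged commands:

    # Add (or refresh) the control-plane alias used by kubeadm and the kubeconfig files.
    grep -q 'control-plane.minikube.internal' /etc/hosts || \
        printf '192.168.39.147\tcontrol-plane.minikube.internal\n' | sudo tee -a /etc/hosts
    sudo systemctl daemon-reload
    sudo systemctl start kubelet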
	I0722 10:29:47.740047   14017 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127 for IP: 192.168.39.147
	I0722 10:29:47.740072   14017 certs.go:194] generating shared ca certs ...
	I0722 10:29:47.740096   14017 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:47.740246   14017 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 10:29:47.874497   14017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt ...
	I0722 10:29:47.874531   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt: {Name:mke882a38fe6f483e6530028b8df28144d29a855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:47.874703   14017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key ...
	I0722 10:29:47.874717   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key: {Name:mkf540d4917bbffc298d8aa1a4169d65a42a8673 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:47.874812   14017 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 10:29:47.973344   14017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt ...
	I0722 10:29:47.973374   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt: {Name:mke2b7b72f11e82846972309d55ed3d0e72012b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:47.973545   14017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key ...
	I0722 10:29:47.973560   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key: {Name:mk744638ea69c3f6193a23844c6a68538dfb44a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:47.973663   14017 certs.go:256] generating profile certs ...
	I0722 10:29:47.973731   14017 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.key
	I0722 10:29:47.973749   14017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt with IP's: []
	I0722 10:29:48.236064   14017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt ...
	I0722 10:29:48.236093   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: {Name:mkfc010ff291afc7aee26ac16e832d5f514edb00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:48.236261   14017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.key ...
	I0722 10:29:48.236275   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.key: {Name:mkf0c231b4b54ef7c9316e71266a716bdfb49393 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:48.236367   14017 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.key.43abceed
	I0722 10:29:48.236405   14017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.crt.43abceed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.147]
	I0722 10:29:48.468176   14017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.crt.43abceed ...
	I0722 10:29:48.468208   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.crt.43abceed: {Name:mk921cfb0bc1062e3295be5c5ec1a1e46daf48a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:48.468373   14017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.key.43abceed ...
	I0722 10:29:48.468409   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.key.43abceed: {Name:mkf336033ff12e37cb73c650a52c869b86c144ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:48.468506   14017 certs.go:381] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.crt.43abceed -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.crt
	I0722 10:29:48.468596   14017 certs.go:385] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.key.43abceed -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.key
	I0722 10:29:48.468662   14017 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/proxy-client.key
	I0722 10:29:48.468684   14017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/proxy-client.crt with IP's: []
	I0722 10:29:48.766613   14017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/proxy-client.crt ...
	I0722 10:29:48.766643   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/proxy-client.crt: {Name:mk22b627da338fc6b9d9dd57a7688665d43c25aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:48.766810   14017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/proxy-client.key ...
	I0722 10:29:48.766824   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/proxy-client.key: {Name:mk8bfd98994daf8915ba3441b0b1840e2d93aebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:48.767011   14017 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 10:29:48.767055   14017 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 10:29:48.767089   14017 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 10:29:48.767124   14017 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 10:29:48.767671   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 10:29:48.793320   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 10:29:48.819848   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 10:29:48.848414   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 10:29:48.872347   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0722 10:29:48.895103   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 10:29:48.920678   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 10:29:48.945034   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 10:29:48.968049   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 10:29:48.991161   14017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 10:29:49.007024   14017 ssh_runner.go:195] Run: openssl version
	I0722 10:29:49.012452   14017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 10:29:49.022593   14017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:29:49.026899   14017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:29:49.026947   14017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:29:49.032557   14017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
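Installing the minikube CA into the guest trust store is the standard OpenSSL hash-symlink scheme: the PEM is linked under /etc/ssl/certs by name and again by its subject hash (b5213941 in this run). In shell terms, following the logged commands:

    # Link the CA by name, compute its subject hash, then link it by hash.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"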
	I0722 10:29:49.042982   14017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 10:29:49.046988   14017 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 10:29:49.047030   14017 kubeadm.go:392] StartCluster: {Name:addons-362127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-362127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:29:49.047101   14017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 10:29:49.047162   14017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 10:29:49.088580   14017 cri.go:89] found id: ""
	I0722 10:29:49.088650   14017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 10:29:49.101780   14017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 10:29:49.110882   14017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 10:29:49.119802   14017 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 10:29:49.119821   14017 kubeadm.go:157] found existing configuration files:
	
	I0722 10:29:49.119853   14017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 10:29:49.128787   14017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 10:29:49.128835   14017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 10:29:49.137716   14017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 10:29:49.146203   14017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 10:29:49.146242   14017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 10:29:49.155104   14017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 10:29:49.163468   14017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 10:29:49.163508   14017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 10:29:49.172239   14017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 10:29:49.180970   14017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 10:29:49.181022   14017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
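For reference, the stale-config check and cleanup logged above reduce to a simple pattern: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and delete any file that does not contain it (or does not exist), so that kubeadm can write fresh ones. A minimal shell sketch of that pattern, not minikube's actual implementation:

	# Illustrative sketch of the stale kubeconfig cleanup shown in the log above.
	ENDPOINT="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # A config that never mentions the expected endpoint (or is absent) is treated
	  # as stale and removed so that 'kubeadm init' can regenerate it.
	  if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done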
	I0722 10:29:49.189794   14017 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 10:29:49.386269   14017 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 10:29:59.753185   14017 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 10:29:59.753269   14017 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 10:29:59.753377   14017 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 10:29:59.753516   14017 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 10:29:59.753640   14017 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 10:29:59.753718   14017 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 10:29:59.755928   14017 out.go:204]   - Generating certificates and keys ...
	I0722 10:29:59.756024   14017 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 10:29:59.756115   14017 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 10:29:59.756202   14017 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0722 10:29:59.756281   14017 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0722 10:29:59.756366   14017 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0722 10:29:59.756445   14017 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0722 10:29:59.756522   14017 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0722 10:29:59.756676   14017 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-362127 localhost] and IPs [192.168.39.147 127.0.0.1 ::1]
	I0722 10:29:59.756738   14017 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0722 10:29:59.756849   14017 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-362127 localhost] and IPs [192.168.39.147 127.0.0.1 ::1]
	I0722 10:29:59.756906   14017 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0722 10:29:59.756959   14017 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0722 10:29:59.757014   14017 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0722 10:29:59.757086   14017 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 10:29:59.757140   14017 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 10:29:59.757191   14017 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 10:29:59.757235   14017 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 10:29:59.757325   14017 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 10:29:59.757375   14017 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 10:29:59.757441   14017 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 10:29:59.757506   14017 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 10:29:59.758899   14017 out.go:204]   - Booting up control plane ...
	I0722 10:29:59.758969   14017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 10:29:59.759063   14017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 10:29:59.759131   14017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 10:29:59.759258   14017 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 10:29:59.759341   14017 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 10:29:59.759381   14017 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 10:29:59.759491   14017 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 10:29:59.759554   14017 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 10:29:59.759631   14017 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.613572ms
	I0722 10:29:59.759738   14017 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 10:29:59.759822   14017 kubeadm.go:310] [api-check] The API server is healthy after 5.501358528s
	I0722 10:29:59.759954   14017 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 10:29:59.760096   14017 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 10:29:59.760164   14017 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 10:29:59.760326   14017 kubeadm.go:310] [mark-control-plane] Marking the node addons-362127 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 10:29:59.760406   14017 kubeadm.go:310] [bootstrap-token] Using token: e88oa7.cou2ewfo3a53ksgg
	I0722 10:29:59.762449   14017 out.go:204]   - Configuring RBAC rules ...
	I0722 10:29:59.762541   14017 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 10:29:59.762609   14017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 10:29:59.762714   14017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 10:29:59.762815   14017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 10:29:59.762915   14017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 10:29:59.762997   14017 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 10:29:59.763103   14017 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 10:29:59.763147   14017 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 10:29:59.763186   14017 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 10:29:59.763191   14017 kubeadm.go:310] 
	I0722 10:29:59.763257   14017 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 10:29:59.763273   14017 kubeadm.go:310] 
	I0722 10:29:59.763338   14017 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 10:29:59.763344   14017 kubeadm.go:310] 
	I0722 10:29:59.763382   14017 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 10:29:59.763436   14017 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 10:29:59.763478   14017 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 10:29:59.763484   14017 kubeadm.go:310] 
	I0722 10:29:59.763527   14017 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 10:29:59.763533   14017 kubeadm.go:310] 
	I0722 10:29:59.763571   14017 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 10:29:59.763576   14017 kubeadm.go:310] 
	I0722 10:29:59.763618   14017 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 10:29:59.763679   14017 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 10:29:59.763735   14017 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 10:29:59.763741   14017 kubeadm.go:310] 
	I0722 10:29:59.763814   14017 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 10:29:59.763891   14017 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 10:29:59.763896   14017 kubeadm.go:310] 
	I0722 10:29:59.763967   14017 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e88oa7.cou2ewfo3a53ksgg \
	I0722 10:29:59.764054   14017 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 10:29:59.764073   14017 kubeadm.go:310] 	--control-plane 
	I0722 10:29:59.764078   14017 kubeadm.go:310] 
	I0722 10:29:59.764147   14017 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 10:29:59.764153   14017 kubeadm.go:310] 
	I0722 10:29:59.764224   14017 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e88oa7.cou2ewfo3a53ksgg \
	I0722 10:29:59.764313   14017 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
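Once init reports success, the admin kubeconfig it wrote can be used to verify the new control plane directly on the node; a quick sanity check using the path printed in the output above plus standard kubectl:

	# Post-init sanity check on the node, using the kubeconfig kubeadm just wrote.
	export KUBECONFIG=/etc/kubernetes/admin.conf
	kubectl get nodes -o wide          # the new control-plane node should be listed
	kubectl get pods -n kube-system    # etcd, kube-apiserver, controller-manager, scheduler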
	I0722 10:29:59.764327   14017 cni.go:84] Creating CNI manager for ""
	I0722 10:29:59.764335   14017 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 10:29:59.765783   14017 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 10:29:59.766932   14017 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 10:29:59.777690   14017 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
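The 496-byte conflist copied above is what the kubelet's bridge CNI will read. Its exact contents are not shown in the log; a typical bridge-plus-portmap conflist of this shape looks roughly like the following (field values are illustrative defaults, not necessarily what minikube wrote here):

	# Illustrative bridge CNI config of the kind placed at /etc/cni/net.d/1-k8s.conflist.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF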
	I0722 10:29:59.795331   14017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 10:29:59.795411   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:29:59.795418   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-362127 minikube.k8s.io/updated_at=2024_07_22T10_29_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=addons-362127 minikube.k8s.io/primary=true
	I0722 10:29:59.923266   14017 ops.go:34] apiserver oom_adj: -16
	I0722 10:29:59.923425   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:00.424211   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:00.924146   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:01.424410   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:01.924476   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:02.424256   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:02.924350   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:03.424400   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:03.924254   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:04.424118   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:04.924332   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:05.423733   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:05.924069   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:06.423929   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:06.923488   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:07.423545   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:07.924186   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:08.424363   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:08.924306   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:09.423491   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:09.924332   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:10.423814   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:10.923525   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:11.423464   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:11.923450   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:12.423532   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:12.923460   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:13.002799   14017 kubeadm.go:1113] duration metric: took 13.207457539s to wait for elevateKubeSystemPrivileges
	I0722 10:30:13.002839   14017 kubeadm.go:394] duration metric: took 23.955811499s to StartCluster
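The burst of "kubectl get sa default" calls above is minikube waiting for the default service account to exist before it finishes elevating kube-system privileges; the same wait can be written as a small polling loop (sketch only, roughly one attempt every 500ms as in the log):

	# Poll until the 'default' service account exists, mirroring the loop in the log above.
	KUBECTL=/var/lib/minikube/binaries/v1.30.3/kubectl
	KUBECONFIG_PATH=/var/lib/minikube/kubeconfig
	until sudo "$KUBECTL" get sa default --kubeconfig="$KUBECONFIG_PATH" >/dev/null 2>&1; do
	  sleep 0.5
	done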
	I0722 10:30:13.002857   14017 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:30:13.002982   14017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:30:13.003351   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:30:13.003535   14017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0722 10:30:13.003557   14017 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
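That 6m0s node wait amounts to waiting for the node's Ready condition; checked by hand it would look like this (standard kubectl, same context the test uses):

	# Manual equivalent of the node wait: block until the node reports Ready.
	kubectl --context addons-362127 wait --for=condition=Ready node/addons-362127 --timeout=6m0s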
	I0722 10:30:13.003637   14017 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
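The toEnable map above comes from the profile's addon settings; the same addons can also be toggled individually from the CLI, for example (standard minikube commands, shown for reference):

	# Inspect or toggle addons on this profile with the minikube CLI.
	minikube addons list -p addons-362127
	minikube addons enable metrics-server -p addons-362127
	minikube addons disable volcano -p addons-362127   # volcano is later skipped on crio anyway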
	I0722 10:30:13.003767   14017 addons.go:69] Setting yakd=true in profile "addons-362127"
	I0722 10:30:13.003816   14017 addons.go:234] Setting addon yakd=true in "addons-362127"
	I0722 10:30:13.003856   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.003866   14017 addons.go:69] Setting ingress-dns=true in profile "addons-362127"
	I0722 10:30:13.003908   14017 addons.go:234] Setting addon ingress-dns=true in "addons-362127"
	I0722 10:30:13.003916   14017 addons.go:69] Setting cloud-spanner=true in profile "addons-362127"
	I0722 10:30:13.003934   14017 addons.go:234] Setting addon cloud-spanner=true in "addons-362127"
	I0722 10:30:13.003938   14017 addons.go:69] Setting registry=true in profile "addons-362127"
	I0722 10:30:13.003949   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.003961   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.003963   14017 addons.go:69] Setting gcp-auth=true in profile "addons-362127"
	I0722 10:30:13.003973   14017 addons.go:234] Setting addon registry=true in "addons-362127"
	I0722 10:30:13.003985   14017 mustload.go:65] Loading cluster: addons-362127
	I0722 10:30:13.004000   14017 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-362127"
	I0722 10:30:13.004015   14017 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-362127"
	I0722 10:30:13.004026   14017 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-362127"
	I0722 10:30:13.004040   14017 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-362127"
	I0722 10:30:13.004065   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.004170   14017 config.go:182] Loaded profile config "addons-362127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:30:13.003938   14017 addons.go:69] Setting helm-tiller=true in profile "addons-362127"
	I0722 10:30:13.004332   14017 addons.go:69] Setting volcano=true in profile "addons-362127"
	I0722 10:30:13.004346   14017 addons.go:234] Setting addon helm-tiller=true in "addons-362127"
	I0722 10:30:13.004355   14017 addons.go:234] Setting addon volcano=true in "addons-362127"
	I0722 10:30:13.004359   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.004369   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.004395   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.004405   14017 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-362127"
	I0722 10:30:13.004408   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.004427   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.004443   14017 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-362127"
	I0722 10:30:13.004467   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.004483   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.004503   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.004551   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.004577   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.003949   14017 addons.go:69] Setting metrics-server=true in profile "addons-362127"
	I0722 10:30:13.004627   14017 addons.go:69] Setting storage-provisioner=true in profile "addons-362127"
	I0722 10:30:13.004654   14017 addons.go:234] Setting addon storage-provisioner=true in "addons-362127"
	I0722 10:30:13.004655   14017 addons.go:234] Setting addon metrics-server=true in "addons-362127"
	I0722 10:30:13.004683   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.004701   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.004706   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.004722   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.004768   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.004799   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.004806   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.004822   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.005064   14017 addons.go:69] Setting volumesnapshots=true in profile "addons-362127"
	I0722 10:30:13.005086   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.005092   14017 addons.go:234] Setting addon volumesnapshots=true in "addons-362127"
	I0722 10:30:13.005106   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.005114   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.005125   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.005133   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.004399   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.005199   14017 addons.go:69] Setting inspektor-gadget=true in profile "addons-362127"
	I0722 10:30:13.005203   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.004347   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.005221   14017 addons.go:234] Setting addon inspektor-gadget=true in "addons-362127"
	I0722 10:30:13.003902   14017 addons.go:69] Setting default-storageclass=true in profile "addons-362127"
	I0722 10:30:13.005251   14017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-362127"
	I0722 10:30:13.005092   14017 addons.go:69] Setting ingress=true in profile "addons-362127"
	I0722 10:30:13.005256   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.005268   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.005272   14017 addons.go:234] Setting addon ingress=true in "addons-362127"
	I0722 10:30:13.003713   14017 config.go:182] Loaded profile config "addons-362127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:30:13.004006   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.006509   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.006821   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.006862   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.006882   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.006901   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.007274   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.009237   14017 out.go:177] * Verifying Kubernetes components...
	I0722 10:30:13.017289   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.017378   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.017432   14017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:30:13.025313   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45045
	I0722 10:30:13.026184   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.026741   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.026766   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.027097   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.027651   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.027683   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.030060   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36207
	I0722 10:30:13.030251   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43055
	I0722 10:30:13.030649   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.030729   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.031179   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.031194   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.031244   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.031267   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.031517   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.031570   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.032145   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.032167   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.032188   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.032197   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.033708   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34987
	I0722 10:30:13.037649   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.037689   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.038330   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.038365   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.044585   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46675
	I0722 10:30:13.044699   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35755
	I0722 10:30:13.044858   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45359
	I0722 10:30:13.044949   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34377
	I0722 10:30:13.045514   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.046005   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.046027   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.046547   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.046692   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.046758   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.046841   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.048938   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.048957   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.049088   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.049098   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.049224   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.049234   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.049288   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.049338   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46795
	I0722 10:30:13.049765   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.050321   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.050362   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.050666   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.050690   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.050757   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.050788   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39585
	I0722 10:30:13.050807   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.050875   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.051116   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.051295   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.051443   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.051499   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.051537   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.052706   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.053111   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.053132   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.053545   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.054120   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.054155   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.054442   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.054790   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.054819   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.055760   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.055777   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.056277   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.056850   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.056883   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.057584   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.057621   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.057672   14017 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-362127"
	I0722 10:30:13.057726   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.058050   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.058076   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.059242   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33345
	I0722 10:30:13.061004   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.061574   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.061590   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.061994   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.062628   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.062662   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.066808   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45445
	I0722 10:30:13.067349   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.067903   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.067921   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.068299   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.068517   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.070594   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.072679   14017 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0722 10:30:13.074133   14017 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0722 10:30:13.074151   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0722 10:30:13.074172   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.077848   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.078433   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.078455   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.078646   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.078859   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.079072   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.079254   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
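Each addon is installed the same way from here on: its manifest is copied onto the node under /etc/kubernetes/addons/ over SSH and then applied with the bundled kubectl. A manual equivalent of that pattern (sketch only; the paths and SSH identity are the ones printed in the log above):

	# Manual equivalent of the addon-install pattern: copy the manifest, then apply it.
	KEY=/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa
	scp -i "$KEY" nvidia-device-plugin.yaml docker@192.168.39.147:/tmp/
	ssh -i "$KEY" docker@192.168.39.147 \
	  "sudo mv /tmp/nvidia-device-plugin.yaml /etc/kubernetes/addons/ && \
	   sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply \
	     -f /etc/kubernetes/addons/nvidia-device-plugin.yaml \
	     --kubeconfig=/var/lib/minikube/kubeconfig"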
	I0722 10:30:13.087075   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34117
	I0722 10:30:13.087637   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.088145   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.088163   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.088698   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.089337   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.089377   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.099040   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35453
	I0722 10:30:13.099819   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.100530   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.100550   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.101405   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.101698   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.103601   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.104345   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I0722 10:30:13.104505   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37631
	I0722 10:30:13.105061   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.105496   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.105512   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.105921   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.106145   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39733
	I0722 10:30:13.106184   14017 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0722 10:30:13.106646   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.106680   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.106873   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
	I0722 10:30:13.107317   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.107705   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.107778   14017 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0722 10:30:13.107789   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I0722 10:30:13.107792   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0722 10:30:13.107810   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.108307   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.108323   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.108484   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.108610   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.108624   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.108949   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.109236   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.109277   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.109698   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.109713   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.110103   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.110141   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.110143   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0722 10:30:13.110246   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39205
	I0722 10:30:13.110823   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.110859   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.111223   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.111266   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.111330   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.111363   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.111374   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.111397   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.111566   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43389
	I0722 10:30:13.111658   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.111674   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.111701   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.111803   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.111815   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.111825   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.111972   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.112170   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.112226   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.112271   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.112463   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.113037   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.113054   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.113108   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38811
	I0722 10:30:13.113243   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.113322   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.113599   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.113658   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42777
	I0722 10:30:13.113821   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.113837   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.113880   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37835
	I0722 10:30:13.113889   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42587
	I0722 10:30:13.114230   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.114352   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.114390   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.114458   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.114642   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.114774   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.114788   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.114838   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.115728   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.115779   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.115858   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.115870   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.116465   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.116502   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.116692   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.116856   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.117346   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.117542   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.117685   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.117696   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.118346   14017 addons.go:234] Setting addon default-storageclass=true in "addons-362127"
	I0722 10:30:13.118379   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.118381   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.118750   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.118780   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.119520   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.119808   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.120102   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.120522   14017 out.go:177]   - Using image docker.io/registry:2.8.3
	I0722 10:30:13.120523   14017 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0722 10:30:13.120569   14017 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0722 10:30:13.120783   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.121809   14017 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0722 10:30:13.122481   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.122498   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.122575   14017 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0722 10:30:13.122597   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0722 10:30:13.122620   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.122952   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.123194   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.123584   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.123831   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:13.123845   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:13.124537   14017 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0722 10:30:13.124605   14017 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0722 10:30:13.124552   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0722 10:30:13.124776   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.125979   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:13.126018   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:13.126034   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:13.126047   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:13.126054   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:13.126195   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.126475   14017 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0722 10:30:13.126486   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0722 10:30:13.126500   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.126575   14017 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0722 10:30:13.127846   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0722 10:30:13.128487   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.128995   14017 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0722 10:30:13.129440   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
	I0722 10:30:13.129960   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.130041   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.130060   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.130061   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0722 10:30:13.130199   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.130375   14017 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0722 10:30:13.130390   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0722 10:30:13.130400   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.130404   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.130411   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.130456   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.130612   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.130771   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.130823   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.130977   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.131272   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.131323   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.131763   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.131782   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.131817   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.131832   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.132172   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.132294   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.132599   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.132615   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.132929   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0722 10:30:13.133057   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.133109   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.133341   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.133589   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.133637   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.134125   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:13.134168   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:13.134176   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	W0722 10:30:13.134235   14017 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0722 10:30:13.134395   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34893
	I0722 10:30:13.134986   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.135509   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.135526   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.136121   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.136457   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.136591   14017 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0722 10:30:13.136819   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.137211   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.137230   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.137355   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35035
	I0722 10:30:13.137404   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0722 10:30:13.137478   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.137953   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39761
	I0722 10:30:13.137999   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.138033   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.138186   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.138336   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.138660   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.138672   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.138897   14017 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 10:30:13.138913   14017 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 10:30:13.138928   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.138952   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.139124   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.139357   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.139371   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.139434   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.139525   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.139978   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.140347   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0722 10:30:13.140487   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.141446   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0722 10:30:13.142043   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.142493   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0722 10:30:13.142550   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.142503   14017 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0722 10:30:13.142573   14017 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0722 10:30:13.142598   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.142980   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.143001   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.143288   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.143708   14017 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 10:30:13.144360   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.144669   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.144810   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.145103   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.145288   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0722 10:30:13.145371   14017 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 10:30:13.145383   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 10:30:13.145397   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.146563   14017 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0722 10:30:13.146848   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.147341   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.147375   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.147651   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0722 10:30:13.147710   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.147717   14017 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0722 10:30:13.147729   14017 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0722 10:30:13.147746   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.147890   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.148224   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.148432   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.148790   14017 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0722 10:30:13.148802   14017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0722 10:30:13.148817   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.149141   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.149773   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.149798   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.149972   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.150153   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.150312   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.150523   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.152536   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.152897   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.153099   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.153125   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.153334   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.153557   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.153581   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.153609   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.153773   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.153830   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.153954   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.153994   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.154317   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.154449   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.156087   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40987
	I0722 10:30:13.156415   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.156986   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.157002   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.157395   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.157518   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33365
	I0722 10:30:13.157612   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37753
	I0722 10:30:13.157739   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.157917   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.157992   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.158310   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.158328   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.158444   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.158458   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.158718   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.158780   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.158983   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.159395   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.159429   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.159528   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.160412   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.161528   14017 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	W0722 10:30:13.162057   14017 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48638->192.168.39.147:22: read: connection reset by peer
	I0722 10:30:13.162089   14017 retry.go:31] will retry after 222.201543ms: ssh: handshake failed: read tcp 192.168.39.1:48638->192.168.39.147:22: read: connection reset by peer
	I0722 10:30:13.162694   14017 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0722 10:30:13.163523   14017 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0722 10:30:13.163539   14017 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0722 10:30:13.163555   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.165419   14017 out.go:177]   - Using image docker.io/busybox:stable
	I0722 10:30:13.166545   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.166587   14017 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0722 10:30:13.166608   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0722 10:30:13.166626   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.166984   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.167007   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.167175   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.167321   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.167470   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.167598   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.169565   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.169992   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.170015   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.170186   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.170333   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.170479   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.170595   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.195866   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45681
	I0722 10:30:13.196331   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.197236   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.197257   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.197541   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.197692   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.199127   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.199318   14017 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 10:30:13.199331   14017 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 10:30:13.199344   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.202167   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.202558   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.202587   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.202738   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.202916   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.203046   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.203186   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	W0722 10:30:13.203833   14017 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48668->192.168.39.147:22: read: connection reset by peer
	I0722 10:30:13.203860   14017 retry.go:31] will retry after 358.673458ms: ssh: handshake failed: read tcp 192.168.39.1:48668->192.168.39.147:22: read: connection reset by peer
	I0722 10:30:13.426152   14017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 10:30:13.426172   14017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
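For reference, the bash pipeline in the Run line above patches CoreDNS's Corefile in place: it fetches the coredns ConfigMap, uses sed to insert a hosts block (mapping host.minikube.internal to the host-side gateway 192.168.39.1) ahead of the existing forward directive and a log directive ahead of errors, then pushes the result back with kubectl replace. Assuming the stock Corefile layout, the patched server block would look roughly like this (a sketch reconstructed from the sed expressions, not a capture of the actual ConfigMap):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
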
	I0722 10:30:13.491038   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0722 10:30:13.506268   14017 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0722 10:30:13.506292   14017 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0722 10:30:13.626071   14017 node_ready.go:35] waiting up to 6m0s for node "addons-362127" to be "Ready" ...
	I0722 10:30:13.629628   14017 node_ready.go:49] node "addons-362127" has status "Ready":"True"
	I0722 10:30:13.629650   14017 node_ready.go:38] duration metric: took 3.541335ms for node "addons-362127" to be "Ready" ...
	I0722 10:30:13.629659   14017 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 10:30:13.637896   14017 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kdg7f" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:13.649818   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0722 10:30:13.705008   14017 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0722 10:30:13.705037   14017 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0722 10:30:13.713558   14017 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0722 10:30:13.713580   14017 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0722 10:30:13.727642   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0722 10:30:13.747900   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0722 10:30:13.762692   14017 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 10:30:13.762712   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0722 10:30:13.763702   14017 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0722 10:30:13.763718   14017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0722 10:30:13.782062   14017 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0722 10:30:13.782092   14017 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0722 10:30:13.796051   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 10:30:13.803750   14017 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0722 10:30:13.803768   14017 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0722 10:30:13.815953   14017 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0722 10:30:13.815974   14017 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0722 10:30:13.825042   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0722 10:30:13.886337   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0722 10:30:13.996265   14017 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0722 10:30:13.996285   14017 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0722 10:30:13.999231   14017 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0722 10:30:13.999241   14017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0722 10:30:14.047801   14017 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0722 10:30:14.047831   14017 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0722 10:30:14.078501   14017 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0722 10:30:14.078520   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0722 10:30:14.086619   14017 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0722 10:30:14.086638   14017 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0722 10:30:14.087686   14017 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 10:30:14.087708   14017 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 10:30:14.156761   14017 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0722 10:30:14.156783   14017 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0722 10:30:14.203561   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 10:30:14.235008   14017 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0722 10:30:14.235034   14017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0722 10:30:14.240487   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0722 10:30:14.279295   14017 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0722 10:30:14.279320   14017 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0722 10:30:14.290508   14017 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0722 10:30:14.290525   14017 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0722 10:30:14.331552   14017 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0722 10:30:14.331575   14017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0722 10:30:14.382953   14017 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 10:30:14.382984   14017 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 10:30:14.416283   14017 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0722 10:30:14.416310   14017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0722 10:30:14.464745   14017 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0722 10:30:14.464765   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0722 10:30:14.536244   14017 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0722 10:30:14.536275   14017 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0722 10:30:14.636723   14017 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0722 10:30:14.636744   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0722 10:30:14.638495   14017 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0722 10:30:14.638512   14017 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0722 10:30:14.803141   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 10:30:14.806153   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0722 10:30:14.966748   14017 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0722 10:30:14.966772   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0722 10:30:14.979835   14017 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0722 10:30:14.979867   14017 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0722 10:30:14.996275   14017 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0722 10:30:14.996300   14017 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0722 10:30:15.338896   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0722 10:30:15.387338   14017 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0722 10:30:15.387370   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0722 10:30:15.442765   14017 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0722 10:30:15.442792   14017 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0722 10:30:15.644473   14017 pod_ready.go:102] pod "coredns-7db6d8ff4d-kdg7f" in "kube-system" namespace has status "Ready":"False"
	I0722 10:30:15.708318   14017 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0722 10:30:15.708340   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0722 10:30:15.750717   14017 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.324511248s)
	I0722 10:30:15.750753   14017 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0722 10:30:15.763091   14017 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0722 10:30:15.763124   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0722 10:30:16.023332   14017 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0722 10:30:16.023363   14017 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0722 10:30:16.060541   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0722 10:30:16.255956   14017 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-362127" context rescaled to 1 replicas
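The rescale logged by kapi.go above (trimming coredns from the two pods seen earlier down to one) has the same effect as running the following by hand; the exact mechanism inside kapi.go may differ:

    kubectl --context addons-362127 -n kube-system scale deployment coredns --replicas=1
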
	I0722 10:30:16.294218   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0722 10:30:17.711255   14017 pod_ready.go:92] pod "coredns-7db6d8ff4d-kdg7f" in "kube-system" namespace has status "Ready":"True"
	I0722 10:30:17.711287   14017 pod_ready.go:81] duration metric: took 4.073364802s for pod "coredns-7db6d8ff4d-kdg7f" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:17.711301   14017 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rdwgl" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:17.812508   14017 pod_ready.go:92] pod "coredns-7db6d8ff4d-rdwgl" in "kube-system" namespace has status "Ready":"True"
	I0722 10:30:17.812532   14017 pod_ready.go:81] duration metric: took 101.223088ms for pod "coredns-7db6d8ff4d-rdwgl" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:17.812545   14017 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-362127" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:17.918052   14017 pod_ready.go:92] pod "etcd-addons-362127" in "kube-system" namespace has status "Ready":"True"
	I0722 10:30:17.918075   14017 pod_ready.go:81] duration metric: took 105.522311ms for pod "etcd-addons-362127" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:17.918086   14017 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-362127" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:17.971809   14017 pod_ready.go:92] pod "kube-apiserver-addons-362127" in "kube-system" namespace has status "Ready":"True"
	I0722 10:30:17.971832   14017 pod_ready.go:81] duration metric: took 53.738027ms for pod "kube-apiserver-addons-362127" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:17.971844   14017 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-362127" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:18.094589   14017 pod_ready.go:92] pod "kube-controller-manager-addons-362127" in "kube-system" namespace has status "Ready":"True"
	I0722 10:30:18.094616   14017 pod_ready.go:81] duration metric: took 122.763299ms for pod "kube-controller-manager-addons-362127" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:18.094629   14017 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w2bc4" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:18.228465   14017 pod_ready.go:92] pod "kube-proxy-w2bc4" in "kube-system" namespace has status "Ready":"True"
	I0722 10:30:18.228490   14017 pod_ready.go:81] duration metric: took 133.85389ms for pod "kube-proxy-w2bc4" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:18.228500   14017 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-362127" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:18.565987   14017 pod_ready.go:92] pod "kube-scheduler-addons-362127" in "kube-system" namespace has status "Ready":"True"
	I0722 10:30:18.566015   14017 pod_ready.go:81] duration metric: took 337.508324ms for pod "kube-scheduler-addons-362127" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:18.566026   14017 pod_ready.go:38] duration metric: took 4.936352102s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
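The extra waiting logged above is minikube's own readiness loop over the system-critical labels listed earlier; checking one of those labels by hand would look roughly like the command below (a sketch, not what pod_ready.go literally executes):

    kubectl --context addons-362127 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
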
	I0722 10:30:18.566043   14017 api_server.go:52] waiting for apiserver process to appear ...
	I0722 10:30:18.566103   14017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:30:18.867368   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.376292276s)
	I0722 10:30:18.867416   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.217570187s)
	I0722 10:30:18.867427   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.867444   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.867452   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.867464   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.867476   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.139804731s)
	I0722 10:30:18.867525   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.867539   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.867523   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.119594513s)
	I0722 10:30:18.867562   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.071487883s)
	I0722 10:30:18.867573   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.867579   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.867585   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.867589   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.867666   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.042600149s)
	I0722 10:30:18.867693   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.867704   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.867974   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.868021   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.868177   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.868200   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.868224   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.868027   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.868048   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.868048   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.868071   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.868345   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.868365   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.868399   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.868082   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.868701   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.868712   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.868720   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.869009   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.869052   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.869090   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.869108   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.868093   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.869161   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.869181   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.869200   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.868103   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.869275   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.869295   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.869909   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.869943   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.869952   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.868110   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.870080   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.870092   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.870103   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.868120   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.868130   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.870156   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.870166   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.870174   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.870761   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.870776   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.871244   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.871296   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.871321   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.871688   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.871734   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.871751   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:19.042794   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:19.042817   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:19.043230   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:19.043291   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:20.122398   14017 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0722 10:30:20.122437   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:20.125843   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:20.126330   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:20.126358   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:20.126534   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:20.126750   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:20.126918   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:20.127069   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:20.637914   14017 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0722 10:30:20.854680   14017 addons.go:234] Setting addon gcp-auth=true in "addons-362127"
	I0722 10:30:20.854727   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:20.855093   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:20.855136   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:20.870284   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0722 10:30:20.870713   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:20.871175   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:20.871192   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:20.871504   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:20.871996   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:20.872025   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:20.886670   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41301
	I0722 10:30:20.887053   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:20.887510   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:20.887530   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:20.887866   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:20.888072   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:20.889626   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:20.889837   14017 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0722 10:30:20.889860   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:20.892433   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:20.892811   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:20.892835   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:20.892980   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:20.893178   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:20.893327   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:20.893482   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:21.734624   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.848243185s)
	I0722 10:30:21.734647   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.53104502s)
	I0722 10:30:21.734677   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.734685   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.734690   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.734696   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.734718   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.494202834s)
	I0722 10:30:21.734756   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.734772   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.734793   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.931623748s)
	I0722 10:30:21.734821   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.734830   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.734894   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.928713005s)
	I0722 10:30:21.735018   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.735027   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.735032   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.735038   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.735052   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.735056   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.735060   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.735064   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.735065   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.735083   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.735091   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.735102   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.735109   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.735117   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.735124   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.735176   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.735202   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.735210   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.735218   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.735224   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.735314   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.735329   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.735338   14017 addons.go:475] Verifying addon ingress=true in "addons-362127"
	I0722 10:30:21.735476   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.735486   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.735495   14017 addons.go:475] Verifying addon registry=true in "addons-362127"
	I0722 10:30:21.735529   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.735597   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.735644   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.735664   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.735860   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.735925   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.735946   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.735976   14017 addons.go:475] Verifying addon metrics-server=true in "addons-362127"
	I0722 10:30:21.735556   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.737630   14017 out.go:177] * Verifying registry addon...
	I0722 10:30:21.738063   14017 out.go:177] * Verifying ingress addon...
	I0722 10:30:21.738291   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.738341   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.738383   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.738400   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.738408   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.738639   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.738655   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.740085   14017 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-362127 service yakd-dashboard -n yakd-dashboard
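The suggested command resolves the NodePort of the yakd-dashboard Service and opens its URL; on a headless CI host, appending --url prints the resolved address instead of launching a browser:

    minikube -p addons-362127 service yakd-dashboard -n yakd-dashboard --url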
	
	I0722 10:30:21.740325   14017 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0722 10:30:21.740425   14017 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0722 10:30:21.749906   14017 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0722 10:30:21.749922   14017 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0722 10:30:21.749933   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:21.749930   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:21.768058   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.768081   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.768442   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.768487   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.768495   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.793332   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.454399323s)
	W0722 10:30:21.793382   14017 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0722 10:30:21.793421   14017 retry.go:31] will retry after 166.70586ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
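
The failure above is the usual CRD ordering race: the VolumeSnapshotClass object is submitted in the same kubectl invocation as the CRDs that define it, so the API server rejects it with "no matches for kind" until the new CRDs are established, and the addon installer simply retries after a short delay (and later falls back to apply --force, as seen further down). A minimal sketch of that retry-with-backoff pattern is shown below; the helper name and the fixed manifest path are illustrative assumptions, not minikube's actual retry.go.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs "kubectl apply" until it succeeds or the attempts
	// are exhausted, waiting a little longer between tries, roughly matching the
	// "will retry after ..." behaviour visible in the log.
	func applyWithRetry(args []string, attempts int, delay time.Duration) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
			time.Sleep(delay)
			delay *= 2 // back off a little further on every round
		}
		return lastErr
	}

	func main() {
		// Hypothetical manifest path, used only for illustration.
		err := applyWithRetry(
			[]string{"apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
			5, 200*time.Millisecond)
		if err != nil {
			fmt.Println(err)
		}
	}
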
	I0722 10:30:21.793462   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.732876447s)
	I0722 10:30:21.793510   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.793526   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.793792   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.793811   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.793826   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.793834   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.794046   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.794089   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.794104   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.960836   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0722 10:30:22.253459   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:22.254491   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:22.758511   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:22.766317   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:22.795283   14017 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.229154897s)
	I0722 10:30:22.795330   14017 api_server.go:72] duration metric: took 9.791741777s to wait for apiserver process to appear ...
	I0722 10:30:22.795340   14017 api_server.go:88] waiting for apiserver healthz status ...
	I0722 10:30:22.795364   14017 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0722 10:30:22.795356   14017 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.905497122s)
	I0722 10:30:22.795371   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.501103542s)
	I0722 10:30:22.795564   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:22.795580   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:22.795856   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:22.795880   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:22.795890   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:22.795926   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:22.796208   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:22.796222   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:22.796232   14017 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-362127"
	I0722 10:30:22.797378   14017 out.go:177] * Verifying csi-hostpath-driver addon...
	I0722 10:30:22.797377   14017 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0722 10:30:22.798969   14017 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0722 10:30:22.799720   14017 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0722 10:30:22.800175   14017 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0722 10:30:22.800191   14017 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0722 10:30:22.831609   14017 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0722 10:30:22.831629   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:22.849605   14017 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0722 10:30:22.853692   14017 api_server.go:141] control plane version: v1.30.3
	I0722 10:30:22.853719   14017 api_server.go:131] duration metric: took 58.372032ms to wait for apiserver health ...
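
The health check recorded here is a repeated HTTPS GET against the apiserver /healthz endpoint until it answers 200. A minimal sketch of that polling loop, assuming the address from the log and skipping certificate verification for brevity (the real client authenticates with the cluster CA and client certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 OK or the deadline expires. InsecureSkipVerify is for illustration only.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.147:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
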
	I0722 10:30:22.853730   14017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 10:30:22.879001   14017 system_pods.go:59] 19 kube-system pods found
	I0722 10:30:22.879029   14017 system_pods.go:61] "coredns-7db6d8ff4d-kdg7f" [24a11171-e5fb-488e-b75e-bbfffd042dc4] Running
	I0722 10:30:22.879034   14017 system_pods.go:61] "coredns-7db6d8ff4d-rdwgl" [10f869a5-d53d-4fc2-94d5-cab1e86811b8] Running
	I0722 10:30:22.879040   14017 system_pods.go:61] "csi-hostpath-attacher-0" [556914c5-386d-44c4-acde-a28f10ecd9a1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0722 10:30:22.879045   14017 system_pods.go:61] "csi-hostpath-resizer-0" [ae0dd06b-0088-4667-a538-82fd9abe6baf] Pending
	I0722 10:30:22.879052   14017 system_pods.go:61] "csi-hostpathplugin-hhxpr" [bc97fa01-6616-4254-93df-9873804b1648] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0722 10:30:22.879057   14017 system_pods.go:61] "etcd-addons-362127" [891099bd-687b-4464-8fe2-2d076f624f4f] Running
	I0722 10:30:22.879061   14017 system_pods.go:61] "kube-apiserver-addons-362127" [5a73f7d1-40d1-4d7a-adc9-58ad4eade2c4] Running
	I0722 10:30:22.879064   14017 system_pods.go:61] "kube-controller-manager-addons-362127" [98562678-7e43-4123-bb91-b800b0438089] Running
	I0722 10:30:22.879069   14017 system_pods.go:61] "kube-ingress-dns-minikube" [f2028cf5-46d0-41bc-b6b8-bc8e75607ab4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0722 10:30:22.879072   14017 system_pods.go:61] "kube-proxy-w2bc4" [fff33042-273b-43a2-b72e-7c8a8e6df754] Running
	I0722 10:30:22.879076   14017 system_pods.go:61] "kube-scheduler-addons-362127" [bbe6aea9-80e6-4242-9e26-782460721059] Running
	I0722 10:30:22.879080   14017 system_pods.go:61] "metrics-server-c59844bb4-c7dpf" [7d0a2a6c-b7cf-488c-97d6-3fb459a706c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 10:30:22.879086   14017 system_pods.go:61] "nvidia-device-plugin-daemonset-2k5sr" [2de5556d-cd17-43f7-ba1d-8cc5e131883f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0722 10:30:22.879094   14017 system_pods.go:61] "registry-656c9c8d9c-4sfgx" [b3bc8b0a-e99b-4bf9-aed3-da909aeab28c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0722 10:30:22.879098   14017 system_pods.go:61] "registry-proxy-7tgcs" [30014df8-8abc-48a5-85ce-7a4ab5e79732] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0722 10:30:22.879107   14017 system_pods.go:61] "snapshot-controller-745499f584-m5h79" [656ece8c-0bbc-4456-be78-2c1741b0719e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0722 10:30:22.879117   14017 system_pods.go:61] "snapshot-controller-745499f584-z65vw" [0a051515-d3ec-40cb-a825-f274b48a611e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0722 10:30:22.879123   14017 system_pods.go:61] "storage-provisioner" [ca3da52f-e625-4fbf-8bf7-39f0bd596c5c] Running
	I0722 10:30:22.879128   14017 system_pods.go:61] "tiller-deploy-6677d64bcd-89cmg" [4311f07e-4fde-45b6-ab03-28badd1c17a1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0722 10:30:22.879133   14017 system_pods.go:74] duration metric: took 25.398715ms to wait for pod list to return data ...
	I0722 10:30:22.879141   14017 default_sa.go:34] waiting for default service account to be created ...
	I0722 10:30:22.884021   14017 default_sa.go:45] found service account: "default"
	I0722 10:30:22.884039   14017 default_sa.go:55] duration metric: took 4.890859ms for default service account to be created ...
	I0722 10:30:22.884047   14017 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 10:30:22.909036   14017 system_pods.go:86] 19 kube-system pods found
	I0722 10:30:22.909061   14017 system_pods.go:89] "coredns-7db6d8ff4d-kdg7f" [24a11171-e5fb-488e-b75e-bbfffd042dc4] Running
	I0722 10:30:22.909068   14017 system_pods.go:89] "coredns-7db6d8ff4d-rdwgl" [10f869a5-d53d-4fc2-94d5-cab1e86811b8] Running
	I0722 10:30:22.909074   14017 system_pods.go:89] "csi-hostpath-attacher-0" [556914c5-386d-44c4-acde-a28f10ecd9a1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0722 10:30:22.909082   14017 system_pods.go:89] "csi-hostpath-resizer-0" [ae0dd06b-0088-4667-a538-82fd9abe6baf] Pending
	I0722 10:30:22.909091   14017 system_pods.go:89] "csi-hostpathplugin-hhxpr" [bc97fa01-6616-4254-93df-9873804b1648] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0722 10:30:22.909096   14017 system_pods.go:89] "etcd-addons-362127" [891099bd-687b-4464-8fe2-2d076f624f4f] Running
	I0722 10:30:22.909101   14017 system_pods.go:89] "kube-apiserver-addons-362127" [5a73f7d1-40d1-4d7a-adc9-58ad4eade2c4] Running
	I0722 10:30:22.909105   14017 system_pods.go:89] "kube-controller-manager-addons-362127" [98562678-7e43-4123-bb91-b800b0438089] Running
	I0722 10:30:22.909115   14017 system_pods.go:89] "kube-ingress-dns-minikube" [f2028cf5-46d0-41bc-b6b8-bc8e75607ab4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0722 10:30:22.909119   14017 system_pods.go:89] "kube-proxy-w2bc4" [fff33042-273b-43a2-b72e-7c8a8e6df754] Running
	I0722 10:30:22.909124   14017 system_pods.go:89] "kube-scheduler-addons-362127" [bbe6aea9-80e6-4242-9e26-782460721059] Running
	I0722 10:30:22.909129   14017 system_pods.go:89] "metrics-server-c59844bb4-c7dpf" [7d0a2a6c-b7cf-488c-97d6-3fb459a706c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 10:30:22.909136   14017 system_pods.go:89] "nvidia-device-plugin-daemonset-2k5sr" [2de5556d-cd17-43f7-ba1d-8cc5e131883f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0722 10:30:22.909144   14017 system_pods.go:89] "registry-656c9c8d9c-4sfgx" [b3bc8b0a-e99b-4bf9-aed3-da909aeab28c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0722 10:30:22.909152   14017 system_pods.go:89] "registry-proxy-7tgcs" [30014df8-8abc-48a5-85ce-7a4ab5e79732] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0722 10:30:22.909158   14017 system_pods.go:89] "snapshot-controller-745499f584-m5h79" [656ece8c-0bbc-4456-be78-2c1741b0719e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0722 10:30:22.909166   14017 system_pods.go:89] "snapshot-controller-745499f584-z65vw" [0a051515-d3ec-40cb-a825-f274b48a611e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0722 10:30:22.909170   14017 system_pods.go:89] "storage-provisioner" [ca3da52f-e625-4fbf-8bf7-39f0bd596c5c] Running
	I0722 10:30:22.909176   14017 system_pods.go:89] "tiller-deploy-6677d64bcd-89cmg" [4311f07e-4fde-45b6-ab03-28badd1c17a1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0722 10:30:22.909183   14017 system_pods.go:126] duration metric: took 25.13136ms to wait for k8s-apps to be running ...
	I0722 10:30:22.909190   14017 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 10:30:22.909232   14017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:30:23.005410   14017 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0722 10:30:23.005434   14017 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0722 10:30:23.122393   14017 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0722 10:30:23.122430   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0722 10:30:23.245456   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:23.245781   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:23.256888   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0722 10:30:23.305638   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:23.746594   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:23.749829   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:23.806014   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:24.265525   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:24.271197   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:24.340077   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:24.667430   14017 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.758169956s)
	I0722 10:30:24.667465   14017 system_svc.go:56] duration metric: took 1.758270111s WaitForService to wait for kubelet
	I0722 10:30:24.667476   14017 kubeadm.go:582] duration metric: took 11.663887113s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 10:30:24.667500   14017 node_conditions.go:102] verifying NodePressure condition ...
	I0722 10:30:24.667435   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.706563354s)
	I0722 10:30:24.667585   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:24.667604   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:24.667851   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:24.667856   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:24.667887   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:24.667900   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:24.667912   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:24.668136   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:24.668152   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:24.670158   14017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 10:30:24.670177   14017 node_conditions.go:123] node cpu capacity is 2
	I0722 10:30:24.670188   14017 node_conditions.go:105] duration metric: took 2.682507ms to run NodePressure ...
	I0722 10:30:24.670200   14017 start.go:241] waiting for startup goroutines ...
	I0722 10:30:24.744901   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:24.745329   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:24.808513   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:25.033452   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.776527126s)
	I0722 10:30:25.033513   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:25.033533   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:25.033822   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:25.033840   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:25.033849   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:25.033859   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:25.034089   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:25.034107   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:25.035514   14017 addons.go:475] Verifying addon gcp-auth=true in "addons-362127"
	I0722 10:30:25.036900   14017 out.go:177] * Verifying gcp-auth addon...
	I0722 10:30:25.038798   14017 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0722 10:30:25.050487   14017 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0722 10:30:25.050509   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
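
The long run of near-identical kapi.go lines that follows is the addon verifier polling each label selector roughly twice a second and logging the pod phase until it leaves Pending. A minimal client-go sketch of that wait loop, with the kubeconfig path and timeout chosen only for illustration and one selector taken from the log; the real verifier also tracks readiness and deletions:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel lists pods matching a label selector and keeps polling until
	// one of them reports phase Running or the timeout expires.
	func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("no Running pod for %q in %q after %s", selector, ns, timeout)
	}

	func main() {
		// Illustrative kubeconfig path; the test harness uses its own local kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitForLabel(cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
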
	I0722 10:30:25.245914   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:25.246358   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:25.306026   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:25.543116   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:25.745609   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:25.746031   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:25.805369   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:26.041795   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:26.245866   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:26.247845   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:26.305545   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:26.542336   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:26.876142   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:26.876285   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:26.879274   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:27.042804   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:27.246859   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:27.247184   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:27.305268   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:27.543357   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:27.745784   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:27.747254   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:27.806254   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:28.042325   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:28.245450   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:28.246169   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:28.305166   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:28.543263   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:28.746659   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:28.746989   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:28.805493   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:29.044426   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:29.245559   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:29.249244   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:29.307159   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:29.542897   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:29.746515   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:29.753180   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:29.807348   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:30.042868   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:30.246567   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:30.246584   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:30.305137   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:30.543073   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:30.746622   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:30.746686   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:30.805599   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:31.365157   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:31.365316   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:31.366006   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:31.367186   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:31.542286   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:31.746482   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:31.747845   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:31.804953   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:32.042405   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:32.247637   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:32.247950   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:32.304971   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:32.543087   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:32.746162   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:32.747930   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:32.804439   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:33.041992   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:33.246694   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:33.246967   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:33.305261   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:33.543074   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:33.747514   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:33.747687   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:33.806069   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:34.043040   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:34.246018   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:34.247597   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:34.304936   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:34.542908   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:34.745659   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:34.747782   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:34.806416   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:35.042689   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:35.244923   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:35.246139   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:35.304655   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:35.543703   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:35.747035   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:35.747273   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:35.806360   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:36.043412   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:36.246144   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:36.246213   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:36.305576   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:36.543640   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:36.750085   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:36.750289   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:36.805463   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:37.043240   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:37.246757   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:37.246945   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:37.306475   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:37.543171   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:37.747058   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:37.747314   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:37.808188   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:38.042155   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:38.246425   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:38.249297   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:38.304982   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:38.543471   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:38.745586   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:38.748301   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:38.804592   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:39.042707   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:39.246090   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:39.246332   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:39.307764   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:39.542113   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:39.745769   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:39.746070   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:39.804509   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:40.042010   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:40.245489   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:40.245714   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:40.305906   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:40.974337   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:40.974891   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:40.975164   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:40.976172   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:41.043562   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:41.245266   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:41.246995   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:41.305043   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:41.555348   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:41.747069   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:41.747223   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:41.808560   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:42.042721   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:42.245464   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:42.249003   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:42.305995   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:42.542365   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:42.745971   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:42.747086   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:42.804969   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:43.043138   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:43.245540   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:43.245608   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:43.305541   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:43.542735   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:43.746245   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:43.746455   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:43.805117   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:44.042965   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:44.245948   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:44.246037   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:44.305908   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:44.542481   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:44.746945   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:44.747082   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:44.807255   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:45.043759   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:45.246298   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:45.248510   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:45.304836   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:45.542731   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:45.746017   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:45.746211   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:45.805939   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:46.042073   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:46.245755   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:46.246512   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:46.305340   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:46.553999   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:46.745582   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:46.745768   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:46.805827   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:47.042685   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:47.246090   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:47.247169   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:47.304595   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:47.544504   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:47.746233   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:47.746303   14017 kapi.go:107] duration metric: took 26.005874052s to wait for kubernetes.io/minikube-addons=registry ...
	I0722 10:30:47.806426   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:48.042731   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:48.244924   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:48.305394   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:48.656975   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:48.745084   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:48.806114   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:49.042355   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:49.245137   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:49.307983   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:49.544096   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:49.745342   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:49.805952   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:50.042250   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:50.244762   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:50.305676   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:50.554215   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:50.745100   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:50.806898   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:51.042405   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:51.246301   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:51.305951   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:51.543136   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:51.746678   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:51.805508   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:52.042260   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:52.245098   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:52.305549   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:52.542201   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:52.744827   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:52.805220   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:53.043170   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:53.245570   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:53.306606   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:53.544072   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:53.747296   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:53.806586   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:54.045486   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:54.244466   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:54.305178   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:54.548251   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:54.749307   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:54.806257   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:55.043393   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:55.245338   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:55.305105   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:55.542502   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:55.744165   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:55.806355   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:56.043014   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:56.245030   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:56.305660   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:56.544799   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:56.745402   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:56.806148   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:57.153686   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:57.244642   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:57.313837   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:57.542467   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:57.750949   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:57.806023   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:58.046735   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:58.246980   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:58.308882   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:58.542564   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:58.748134   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:58.806065   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:59.042995   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:59.245176   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:59.306347   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:59.543409   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:59.745202   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:59.805518   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:00.042449   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:00.244641   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:00.305582   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:00.543570   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:00.744474   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:00.805028   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:01.042339   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:01.245153   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:01.305577   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:01.542005   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:01.745983   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:02.237000   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:02.237625   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:02.245024   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:02.305549   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:02.546283   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:02.745545   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:02.809254   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:03.044282   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:03.245062   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:03.305062   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:03.542900   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:03.744600   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:03.806064   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:04.042830   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:04.244754   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:04.305182   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:04.542698   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:04.745513   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:04.806480   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:05.042030   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:05.245136   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:05.306444   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:05.542491   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:06.086964   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:06.088051   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:06.089360   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:06.244124   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:06.308349   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:06.547978   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:06.744931   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:06.804921   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:07.042391   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:07.245026   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:07.304867   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:07.543342   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:07.752711   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:07.804941   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:08.044196   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:08.244975   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:08.305273   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:08.543432   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:08.747215   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:08.806411   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:09.042431   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:09.246350   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:09.305600   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:09.543251   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:09.745040   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:09.808931   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:10.042111   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:10.244930   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:10.307112   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:10.547585   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:11.136092   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:11.137164   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:11.137260   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:11.246520   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:11.304487   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:11.542373   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:11.746274   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:11.811416   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:12.048882   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:12.244441   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:12.310901   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:12.543092   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:12.745226   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:12.806244   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:13.045020   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:13.245256   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:13.308730   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:13.542508   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:13.744736   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:13.805615   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:14.043529   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:14.245632   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:14.305867   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:14.543053   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:14.745097   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:14.806136   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:15.042368   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:15.252649   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:15.311633   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:15.543434   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:15.744373   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:15.804542   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:16.042770   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:16.244665   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:16.306810   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:16.542211   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:16.744777   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:16.805232   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:17.042867   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:17.244807   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:17.304997   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:17.543293   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:17.747942   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:17.805247   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:18.042692   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:18.244757   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:18.304906   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:18.542673   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:18.745183   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:18.806048   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:19.045827   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:19.247573   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:19.305023   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:19.543560   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:19.746348   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:19.810716   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:20.043419   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:20.243921   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:20.305230   14017 kapi.go:107] duration metric: took 57.505506674s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0722 10:31:20.542759   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:20.744679   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:21.042897   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:21.245130   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:21.542964   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:21.745295   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:22.042036   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:22.244675   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:22.542849   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:22.745213   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:23.043263   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:23.245095   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:23.542913   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:23.745006   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:24.042653   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:24.244095   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:24.542690   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:24.745071   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:25.043595   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:25.244318   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:25.542193   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:25.745792   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:26.042623   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:26.244030   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:26.543044   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:26.745099   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:27.042640   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:27.244840   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:27.542177   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:27.744865   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:28.042939   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:28.245305   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:28.543949   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:28.746147   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:29.043150   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:29.246028   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:29.542823   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:29.744670   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:30.042597   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:30.243915   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:30.542777   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:30.746458   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:31.042746   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:31.245490   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:31.543031   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:31.745016   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:32.042959   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:32.244530   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:32.542846   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:32.744493   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:33.042366   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:33.245653   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:33.924658   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:33.926897   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:34.042466   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:34.251612   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:34.548332   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:34.744688   14017 kapi.go:107] duration metric: took 1m13.004359376s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0722 10:31:35.054037   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:35.542697   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:36.045848   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:36.542417   14017 kapi.go:107] duration metric: took 1m11.503612529s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0722 10:31:36.544014   14017 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-362127 cluster.
	I0722 10:31:36.545191   14017 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0722 10:31:36.546441   14017 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0722 10:31:36.547884   14017 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, helm-tiller, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0722 10:31:36.549263   14017 addons.go:510] duration metric: took 1m23.5456299s for enable addons: enabled=[storage-provisioner cloud-spanner helm-tiller nvidia-device-plugin ingress-dns storage-provisioner-rancher metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0722 10:31:36.549298   14017 start.go:246] waiting for cluster config update ...
	I0722 10:31:36.549313   14017 start.go:255] writing updated cluster config ...
	I0722 10:31:36.549581   14017 ssh_runner.go:195] Run: rm -f paused
	I0722 10:31:36.599936   14017 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 10:31:36.601881   14017 out.go:177] * Done! kubectl is now configured to use "addons-362127" cluster and "default" namespace by default
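
	Note on the gcp-auth messages in the output above: the addon's webhook mounts GCP credentials into newly created pods unless the pod carries a label with the `gcp-auth-skip-secret` key. The sketch below is an editorial illustration of opting a pod out, not part of the recorded test run; it assumes client-go, a kubeconfig for the addons-362127 cluster at the default location, and a hypothetical pod name ("skip-gcp-auth-demo"). The label value "true" is also an assumption — the message above only requires the key to be present.

	// opt_out_gcp_auth.go - minimal sketch, assuming client-go and a local kubeconfig.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig that "minikube start" wrote (default ~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "skip-gcp-auth-demo", // hypothetical name for illustration
				// Per the minikube output above, a pod labeled with the
				// gcp-auth-skip-secret key is skipped by the gcp-auth webhook,
				// so no GCP credentials are mounted into it. The value "true"
				// is an assumed convention here.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/nginx", // any workload image
				}},
			},
		}

		created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("created pod %s without gcp-auth credential mounting\n", created.Name)
	}

	As the output also notes, pods created before the addon was enabled are not retrofitted; they would need to be recreated (or the addon re-enabled with --refresh) to pick up the mounted credentials.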
	
	
	==> CRI-O <==
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.545912288Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721644474545884021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580633,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d800d5ab-1608-4a1a-a5e1-43334e41d757 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.546703147Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e0e17ff-0a41-4c6c-b54d-42b16d9cf112 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.546777733Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e0e17ff-0a41-4c6c-b54d-42b16d9cf112 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.547131317Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:800f46b915ccc989770628b7ef110a36e03a045d523dad4fb4b6b43da4e30d08,PodSandboxId:a4761b99cae6cefcf03c97a1aa28ec40d1095292fd0463fa10d356e56d3b3983,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721644465474532720,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-lj5kn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43b1d5c8-b098-4afc-b72d-a5e7c55e8230,},Annotations:map[string]string{io.kubernetes.container.hash: 80638108,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73770316dba17d1304dae312faad6d9decde06687f804b62f2b480a60173a1f4,PodSandboxId:32307f7379d78fe5cf030b62b505697b66a8aae49e78bfe4d9e87f912d5e97cc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721644328376481564,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b92b935b-6089-4609-bf2a-f636364a6400,},Annotations:map[string]string{io.kubernet
es.container.hash: 7365ae81,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fcc9665ac1478218288ecc39835278145d4508db1fed0c9cb762c6c8743d35e,PodSandboxId:ec78a497784b7b5c2bb3a7b215d06bb6694aa37311d0dcd2a81d24b05cfcf74c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721644302838909713,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-25xv5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: a1da9ddd-aa30-431f-8b6d-4f19b1f7d384,},Annotations:map[string]string{io.kubernetes.container.hash: 86c15fa3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d97614ce321bdd3c0a5f2626302115312b0d38930a1ca04d667bd107517db29,PodSandboxId:d7d452b7fdf1ade8b12c048b644023212faf4a2227ff9f507a81f80ec63f96ca,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721644295978944281,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5s6sz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 252ee88a-9e97-4f05-888e-6ffd4a637403,},Annotations:map[string]string{io.kubernetes.container.hash: 43bfb3ae,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0a763da12bc2c9c61d9a2837239d3d0895b5c4f90ada6ff6fc48fde05ec432,PodSandboxId:a572214c6665909cb436e8e3f095f389bda4c3175186dd8c7919aa0c8aff97a6,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAIN
ER_EXITED,CreatedAt:1721644267627718294,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-m7v29,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5aeb8fac-6aeb-4860-80c3-4bec211c87bf,},Annotations:map[string]string{io.kubernetes.container.hash: 58a220bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474bd9c4d656b8bd2259842e860e8e0c1f8c33d92d2623ea8e4a13ef1e494066,PodSandboxId:5e91548be756c27897d3e608a217220f2fac254443cf2fd9e747c95dfe8f6560,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f61
75e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721644266179564563,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rxrsj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7c1b7d9a-6e5b-4cf9-9f52-319a7d79a2ed,},Annotations:map[string]string{io.kubernetes.container.hash: 272d3974,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:894be7fb7c0a807b7c662295b05cec0f6d95d5d1bc597309d4250ed48d7d09de,PodSandboxId:efc9f77cdcb60ea0f17a9bef4832e0965b3f2345e28c7006b3519170f9e6787b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf
31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721644262442510447,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-6h47n,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 75bce171-cade-4a90-afba-510f2e9fb3ce,},Annotations:map[string]string{io.kubernetes.container.hash: 53151e59,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0,PodSandboxId:854ef3e35e4e76b896ebd8a6beb512a4cd95de01c58fdd6e561708e8d6d29582,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2
e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721644257260184770,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-c7dpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d0a2a6c-b7cf-488c-97d6-3fb459a706c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12ef7c28,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a023d343932260c127b3307e1f989c331d5447f05f601090ab7113a5cb23a336,PodSandboxId:6ba14bf1d6d237986b92ccf1497b6991fbc64a0651e36f09e826149911e3d28c,Metadata:&ContainerMetadata{Name:storage
-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721644220094314339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca3da52f-e625-4fbf-8bf7-39f0bd596c5c,},Annotations:map[string]string{io.kubernetes.container.hash: 56a3380a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3009a540031d07af2dafd5a461ece3fc0a592c78dffca549823a7a45d64884c6,PodSandboxId:8b050e4257f8ca8fb66e7b9aaa2e1ce7c7945c7ccbb8c11fabfd0a443b087499,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,}
,Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721644215090459391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rdwgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f869a5-d53d-4fc2-94d5-cab1e86811b8,},Annotations:map[string]string{io.kubernetes.container.hash: 890fcdbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
1038681b91ded1a2d4021ea99fa938b6f32fedd35dbd401e2bd11648def7d0d4,PodSandboxId:027d60d52d84ed76e305f40fa04758485e5c25626399e3ec0c93c17ed58ba809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721644213921900076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2bc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff33042-273b-43a2-b72e-7c8a8e6df754,},Annotations:map[string]string{io.kubernetes.container.hash: aca09f34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca87cc163cf9b4e4096bb74373a2a1ae9
bbed994f549306389e78c8c94ab7f06,PodSandboxId:91c461356ca15c30aa43669c0cff4ea42d2937d49be2c27be9cd808b5dc09baf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721644193426779498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4905a3c9eb7bc4f54b167e5a235e510,},Annotations:map[string]string{io.kubernetes.container.hash: b2869ad3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8095e98c3220c8c62fe67b807e326eb446a2367817da1ba2e256d88b98cfc382,PodSandboxId:4c
5d47da23c1cd808a31a160a186f4058bf06b6949b4fe2e592f963e23e6192a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721644193451565864,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e369930aaf656658263c0657bf4d260,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbab7bd85e1c45b81b0efd7edebef5063711bcc40003e385cfb6f934ca225da,PodSandboxId:a4b8b78d8e3a48d821c
cd264f7bc3347e499661946cae127409dcc381a2c8637,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721644193457041806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2b929fed983322acba41469dd7b540,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9243bc8ee19c7a6e52cac3020b654481b13ba5e02863bda9c7eb8f933bd3fa7e,PodSandboxId:f5aa39a9d
64e91d19e7598d6ecb2cccbe5b5acf5480d595372a5fc59ea209250,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721644193399117470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ea7606da16847bd79a635784b5bb097,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae5f20f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e0e17ff-0a41-4c6c-b54d-42b16d9cf112 name=/runtime.v1.RuntimeService/
ListContainers
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.591819907Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=897f6bdb-1fb5-4f4a-8901-39f5b3f6a525 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.591894867Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=897f6bdb-1fb5-4f4a-8901-39f5b3f6a525 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.593079244Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b70f0f47-4457-4ae7-85ef-90ce5698e823 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.594611717Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721644474594581604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580633,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b70f0f47-4457-4ae7-85ef-90ce5698e823 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.595420399Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4a4f21f-e1e8-42e9-a78c-1101af7427f8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.595496090Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4a4f21f-e1e8-42e9-a78c-1101af7427f8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.595817544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:800f46b915ccc989770628b7ef110a36e03a045d523dad4fb4b6b43da4e30d08,PodSandboxId:a4761b99cae6cefcf03c97a1aa28ec40d1095292fd0463fa10d356e56d3b3983,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721644465474532720,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-lj5kn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43b1d5c8-b098-4afc-b72d-a5e7c55e8230,},Annotations:map[string]string{io.kubernetes.container.hash: 80638108,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73770316dba17d1304dae312faad6d9decde06687f804b62f2b480a60173a1f4,PodSandboxId:32307f7379d78fe5cf030b62b505697b66a8aae49e78bfe4d9e87f912d5e97cc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721644328376481564,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b92b935b-6089-4609-bf2a-f636364a6400,},Annotations:map[string]string{io.kubernet
es.container.hash: 7365ae81,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fcc9665ac1478218288ecc39835278145d4508db1fed0c9cb762c6c8743d35e,PodSandboxId:ec78a497784b7b5c2bb3a7b215d06bb6694aa37311d0dcd2a81d24b05cfcf74c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721644302838909713,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-25xv5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: a1da9ddd-aa30-431f-8b6d-4f19b1f7d384,},Annotations:map[string]string{io.kubernetes.container.hash: 86c15fa3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d97614ce321bdd3c0a5f2626302115312b0d38930a1ca04d667bd107517db29,PodSandboxId:d7d452b7fdf1ade8b12c048b644023212faf4a2227ff9f507a81f80ec63f96ca,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721644295978944281,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5s6sz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 252ee88a-9e97-4f05-888e-6ffd4a637403,},Annotations:map[string]string{io.kubernetes.container.hash: 43bfb3ae,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0a763da12bc2c9c61d9a2837239d3d0895b5c4f90ada6ff6fc48fde05ec432,PodSandboxId:a572214c6665909cb436e8e3f095f389bda4c3175186dd8c7919aa0c8aff97a6,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAIN
ER_EXITED,CreatedAt:1721644267627718294,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-m7v29,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5aeb8fac-6aeb-4860-80c3-4bec211c87bf,},Annotations:map[string]string{io.kubernetes.container.hash: 58a220bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474bd9c4d656b8bd2259842e860e8e0c1f8c33d92d2623ea8e4a13ef1e494066,PodSandboxId:5e91548be756c27897d3e608a217220f2fac254443cf2fd9e747c95dfe8f6560,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f61
75e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721644266179564563,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rxrsj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7c1b7d9a-6e5b-4cf9-9f52-319a7d79a2ed,},Annotations:map[string]string{io.kubernetes.container.hash: 272d3974,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:894be7fb7c0a807b7c662295b05cec0f6d95d5d1bc597309d4250ed48d7d09de,PodSandboxId:efc9f77cdcb60ea0f17a9bef4832e0965b3f2345e28c7006b3519170f9e6787b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf
31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721644262442510447,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-6h47n,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 75bce171-cade-4a90-afba-510f2e9fb3ce,},Annotations:map[string]string{io.kubernetes.container.hash: 53151e59,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0,PodSandboxId:854ef3e35e4e76b896ebd8a6beb512a4cd95de01c58fdd6e561708e8d6d29582,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2
e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721644257260184770,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-c7dpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d0a2a6c-b7cf-488c-97d6-3fb459a706c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12ef7c28,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a023d343932260c127b3307e1f989c331d5447f05f601090ab7113a5cb23a336,PodSandboxId:6ba14bf1d6d237986b92ccf1497b6991fbc64a0651e36f09e826149911e3d28c,Metadata:&ContainerMetadata{Name:storage
-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721644220094314339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca3da52f-e625-4fbf-8bf7-39f0bd596c5c,},Annotations:map[string]string{io.kubernetes.container.hash: 56a3380a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3009a540031d07af2dafd5a461ece3fc0a592c78dffca549823a7a45d64884c6,PodSandboxId:8b050e4257f8ca8fb66e7b9aaa2e1ce7c7945c7ccbb8c11fabfd0a443b087499,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,}
,Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721644215090459391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rdwgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f869a5-d53d-4fc2-94d5-cab1e86811b8,},Annotations:map[string]string{io.kubernetes.container.hash: 890fcdbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
1038681b91ded1a2d4021ea99fa938b6f32fedd35dbd401e2bd11648def7d0d4,PodSandboxId:027d60d52d84ed76e305f40fa04758485e5c25626399e3ec0c93c17ed58ba809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721644213921900076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2bc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff33042-273b-43a2-b72e-7c8a8e6df754,},Annotations:map[string]string{io.kubernetes.container.hash: aca09f34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca87cc163cf9b4e4096bb74373a2a1ae9
bbed994f549306389e78c8c94ab7f06,PodSandboxId:91c461356ca15c30aa43669c0cff4ea42d2937d49be2c27be9cd808b5dc09baf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721644193426779498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4905a3c9eb7bc4f54b167e5a235e510,},Annotations:map[string]string{io.kubernetes.container.hash: b2869ad3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8095e98c3220c8c62fe67b807e326eb446a2367817da1ba2e256d88b98cfc382,PodSandboxId:4c
5d47da23c1cd808a31a160a186f4058bf06b6949b4fe2e592f963e23e6192a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721644193451565864,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e369930aaf656658263c0657bf4d260,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbab7bd85e1c45b81b0efd7edebef5063711bcc40003e385cfb6f934ca225da,PodSandboxId:a4b8b78d8e3a48d821c
cd264f7bc3347e499661946cae127409dcc381a2c8637,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721644193457041806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2b929fed983322acba41469dd7b540,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9243bc8ee19c7a6e52cac3020b654481b13ba5e02863bda9c7eb8f933bd3fa7e,PodSandboxId:f5aa39a9d
64e91d19e7598d6ecb2cccbe5b5acf5480d595372a5fc59ea209250,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721644193399117470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ea7606da16847bd79a635784b5bb097,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae5f20f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4a4f21f-e1e8-42e9-a78c-1101af7427f8 name=/runtime.v1.RuntimeService/
ListContainers
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.630946124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98444be6-009d-4510-aedf-c542ea19a10c name=/runtime.v1.RuntimeService/Version
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.631021697Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98444be6-009d-4510-aedf-c542ea19a10c name=/runtime.v1.RuntimeService/Version
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.632220588Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff20a186-a206-48fa-8fbc-10369222fb2f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.633888521Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721644474633854209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580633,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff20a186-a206-48fa-8fbc-10369222fb2f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.634503822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a646edb-038b-4a71-9d01-7d14fc96b9b2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.634577174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a646edb-038b-4a71-9d01-7d14fc96b9b2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.634935284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:800f46b915ccc989770628b7ef110a36e03a045d523dad4fb4b6b43da4e30d08,PodSandboxId:a4761b99cae6cefcf03c97a1aa28ec40d1095292fd0463fa10d356e56d3b3983,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721644465474532720,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-lj5kn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43b1d5c8-b098-4afc-b72d-a5e7c55e8230,},Annotations:map[string]string{io.kubernetes.container.hash: 80638108,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73770316dba17d1304dae312faad6d9decde06687f804b62f2b480a60173a1f4,PodSandboxId:32307f7379d78fe5cf030b62b505697b66a8aae49e78bfe4d9e87f912d5e97cc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721644328376481564,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b92b935b-6089-4609-bf2a-f636364a6400,},Annotations:map[string]string{io.kubernet
es.container.hash: 7365ae81,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fcc9665ac1478218288ecc39835278145d4508db1fed0c9cb762c6c8743d35e,PodSandboxId:ec78a497784b7b5c2bb3a7b215d06bb6694aa37311d0dcd2a81d24b05cfcf74c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721644302838909713,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-25xv5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: a1da9ddd-aa30-431f-8b6d-4f19b1f7d384,},Annotations:map[string]string{io.kubernetes.container.hash: 86c15fa3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d97614ce321bdd3c0a5f2626302115312b0d38930a1ca04d667bd107517db29,PodSandboxId:d7d452b7fdf1ade8b12c048b644023212faf4a2227ff9f507a81f80ec63f96ca,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721644295978944281,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5s6sz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 252ee88a-9e97-4f05-888e-6ffd4a637403,},Annotations:map[string]string{io.kubernetes.container.hash: 43bfb3ae,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0a763da12bc2c9c61d9a2837239d3d0895b5c4f90ada6ff6fc48fde05ec432,PodSandboxId:a572214c6665909cb436e8e3f095f389bda4c3175186dd8c7919aa0c8aff97a6,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAIN
ER_EXITED,CreatedAt:1721644267627718294,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-m7v29,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5aeb8fac-6aeb-4860-80c3-4bec211c87bf,},Annotations:map[string]string{io.kubernetes.container.hash: 58a220bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474bd9c4d656b8bd2259842e860e8e0c1f8c33d92d2623ea8e4a13ef1e494066,PodSandboxId:5e91548be756c27897d3e608a217220f2fac254443cf2fd9e747c95dfe8f6560,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f61
75e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721644266179564563,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rxrsj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7c1b7d9a-6e5b-4cf9-9f52-319a7d79a2ed,},Annotations:map[string]string{io.kubernetes.container.hash: 272d3974,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:894be7fb7c0a807b7c662295b05cec0f6d95d5d1bc597309d4250ed48d7d09de,PodSandboxId:efc9f77cdcb60ea0f17a9bef4832e0965b3f2345e28c7006b3519170f9e6787b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf
31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721644262442510447,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-6h47n,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 75bce171-cade-4a90-afba-510f2e9fb3ce,},Annotations:map[string]string{io.kubernetes.container.hash: 53151e59,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0,PodSandboxId:854ef3e35e4e76b896ebd8a6beb512a4cd95de01c58fdd6e561708e8d6d29582,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2
e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721644257260184770,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-c7dpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d0a2a6c-b7cf-488c-97d6-3fb459a706c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12ef7c28,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a023d343932260c127b3307e1f989c331d5447f05f601090ab7113a5cb23a336,PodSandboxId:6ba14bf1d6d237986b92ccf1497b6991fbc64a0651e36f09e826149911e3d28c,Metadata:&ContainerMetadata{Name:storage
-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721644220094314339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca3da52f-e625-4fbf-8bf7-39f0bd596c5c,},Annotations:map[string]string{io.kubernetes.container.hash: 56a3380a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3009a540031d07af2dafd5a461ece3fc0a592c78dffca549823a7a45d64884c6,PodSandboxId:8b050e4257f8ca8fb66e7b9aaa2e1ce7c7945c7ccbb8c11fabfd0a443b087499,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,}
,Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721644215090459391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rdwgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f869a5-d53d-4fc2-94d5-cab1e86811b8,},Annotations:map[string]string{io.kubernetes.container.hash: 890fcdbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
1038681b91ded1a2d4021ea99fa938b6f32fedd35dbd401e2bd11648def7d0d4,PodSandboxId:027d60d52d84ed76e305f40fa04758485e5c25626399e3ec0c93c17ed58ba809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721644213921900076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2bc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff33042-273b-43a2-b72e-7c8a8e6df754,},Annotations:map[string]string{io.kubernetes.container.hash: aca09f34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca87cc163cf9b4e4096bb74373a2a1ae9
bbed994f549306389e78c8c94ab7f06,PodSandboxId:91c461356ca15c30aa43669c0cff4ea42d2937d49be2c27be9cd808b5dc09baf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721644193426779498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4905a3c9eb7bc4f54b167e5a235e510,},Annotations:map[string]string{io.kubernetes.container.hash: b2869ad3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8095e98c3220c8c62fe67b807e326eb446a2367817da1ba2e256d88b98cfc382,PodSandboxId:4c
5d47da23c1cd808a31a160a186f4058bf06b6949b4fe2e592f963e23e6192a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721644193451565864,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e369930aaf656658263c0657bf4d260,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbab7bd85e1c45b81b0efd7edebef5063711bcc40003e385cfb6f934ca225da,PodSandboxId:a4b8b78d8e3a48d821c
cd264f7bc3347e499661946cae127409dcc381a2c8637,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721644193457041806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2b929fed983322acba41469dd7b540,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9243bc8ee19c7a6e52cac3020b654481b13ba5e02863bda9c7eb8f933bd3fa7e,PodSandboxId:f5aa39a9d
64e91d19e7598d6ecb2cccbe5b5acf5480d595372a5fc59ea209250,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721644193399117470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ea7606da16847bd79a635784b5bb097,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae5f20f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a646edb-038b-4a71-9d01-7d14fc96b9b2 name=/runtime.v1.RuntimeService/
ListContainers
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.674668270Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f2311235-9fea-435d-8919-f66b1d9314f5 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.674757911Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2311235-9fea-435d-8919-f66b1d9314f5 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.676144043Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f192467f-38b7-4ed5-ace4-11b38a30fa5f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.677314786Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721644474677287885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580633,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f192467f-38b7-4ed5-ace4-11b38a30fa5f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.678104676Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81463a05-184f-4ac2-b042-b3f81506337d name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.678160776Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81463a05-184f-4ac2-b042-b3f81506337d name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:34:34 addons-362127 crio[685]: time="2024-07-22 10:34:34.678635127Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:800f46b915ccc989770628b7ef110a36e03a045d523dad4fb4b6b43da4e30d08,PodSandboxId:a4761b99cae6cefcf03c97a1aa28ec40d1095292fd0463fa10d356e56d3b3983,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721644465474532720,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-lj5kn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43b1d5c8-b098-4afc-b72d-a5e7c55e8230,},Annotations:map[string]string{io.kubernetes.container.hash: 80638108,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73770316dba17d1304dae312faad6d9decde06687f804b62f2b480a60173a1f4,PodSandboxId:32307f7379d78fe5cf030b62b505697b66a8aae49e78bfe4d9e87f912d5e97cc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721644328376481564,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b92b935b-6089-4609-bf2a-f636364a6400,},Annotations:map[string]string{io.kubernet
es.container.hash: 7365ae81,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fcc9665ac1478218288ecc39835278145d4508db1fed0c9cb762c6c8743d35e,PodSandboxId:ec78a497784b7b5c2bb3a7b215d06bb6694aa37311d0dcd2a81d24b05cfcf74c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721644302838909713,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-25xv5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: a1da9ddd-aa30-431f-8b6d-4f19b1f7d384,},Annotations:map[string]string{io.kubernetes.container.hash: 86c15fa3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d97614ce321bdd3c0a5f2626302115312b0d38930a1ca04d667bd107517db29,PodSandboxId:d7d452b7fdf1ade8b12c048b644023212faf4a2227ff9f507a81f80ec63f96ca,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721644295978944281,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5s6sz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 252ee88a-9e97-4f05-888e-6ffd4a637403,},Annotations:map[string]string{io.kubernetes.container.hash: 43bfb3ae,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0a763da12bc2c9c61d9a2837239d3d0895b5c4f90ada6ff6fc48fde05ec432,PodSandboxId:a572214c6665909cb436e8e3f095f389bda4c3175186dd8c7919aa0c8aff97a6,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAIN
ER_EXITED,CreatedAt:1721644267627718294,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-m7v29,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5aeb8fac-6aeb-4860-80c3-4bec211c87bf,},Annotations:map[string]string{io.kubernetes.container.hash: 58a220bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474bd9c4d656b8bd2259842e860e8e0c1f8c33d92d2623ea8e4a13ef1e494066,PodSandboxId:5e91548be756c27897d3e608a217220f2fac254443cf2fd9e747c95dfe8f6560,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f61
75e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721644266179564563,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rxrsj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7c1b7d9a-6e5b-4cf9-9f52-319a7d79a2ed,},Annotations:map[string]string{io.kubernetes.container.hash: 272d3974,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:894be7fb7c0a807b7c662295b05cec0f6d95d5d1bc597309d4250ed48d7d09de,PodSandboxId:efc9f77cdcb60ea0f17a9bef4832e0965b3f2345e28c7006b3519170f9e6787b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf
31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721644262442510447,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-6h47n,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 75bce171-cade-4a90-afba-510f2e9fb3ce,},Annotations:map[string]string{io.kubernetes.container.hash: 53151e59,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0,PodSandboxId:854ef3e35e4e76b896ebd8a6beb512a4cd95de01c58fdd6e561708e8d6d29582,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2
e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721644257260184770,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-c7dpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d0a2a6c-b7cf-488c-97d6-3fb459a706c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12ef7c28,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a023d343932260c127b3307e1f989c331d5447f05f601090ab7113a5cb23a336,PodSandboxId:6ba14bf1d6d237986b92ccf1497b6991fbc64a0651e36f09e826149911e3d28c,Metadata:&ContainerMetadata{Name:storage
-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721644220094314339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca3da52f-e625-4fbf-8bf7-39f0bd596c5c,},Annotations:map[string]string{io.kubernetes.container.hash: 56a3380a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3009a540031d07af2dafd5a461ece3fc0a592c78dffca549823a7a45d64884c6,PodSandboxId:8b050e4257f8ca8fb66e7b9aaa2e1ce7c7945c7ccbb8c11fabfd0a443b087499,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,}
,Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721644215090459391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rdwgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f869a5-d53d-4fc2-94d5-cab1e86811b8,},Annotations:map[string]string{io.kubernetes.container.hash: 890fcdbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
1038681b91ded1a2d4021ea99fa938b6f32fedd35dbd401e2bd11648def7d0d4,PodSandboxId:027d60d52d84ed76e305f40fa04758485e5c25626399e3ec0c93c17ed58ba809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721644213921900076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2bc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff33042-273b-43a2-b72e-7c8a8e6df754,},Annotations:map[string]string{io.kubernetes.container.hash: aca09f34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca87cc163cf9b4e4096bb74373a2a1ae9
bbed994f549306389e78c8c94ab7f06,PodSandboxId:91c461356ca15c30aa43669c0cff4ea42d2937d49be2c27be9cd808b5dc09baf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721644193426779498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4905a3c9eb7bc4f54b167e5a235e510,},Annotations:map[string]string{io.kubernetes.container.hash: b2869ad3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8095e98c3220c8c62fe67b807e326eb446a2367817da1ba2e256d88b98cfc382,PodSandboxId:4c
5d47da23c1cd808a31a160a186f4058bf06b6949b4fe2e592f963e23e6192a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721644193451565864,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e369930aaf656658263c0657bf4d260,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbab7bd85e1c45b81b0efd7edebef5063711bcc40003e385cfb6f934ca225da,PodSandboxId:a4b8b78d8e3a48d821c
cd264f7bc3347e499661946cae127409dcc381a2c8637,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721644193457041806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2b929fed983322acba41469dd7b540,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9243bc8ee19c7a6e52cac3020b654481b13ba5e02863bda9c7eb8f933bd3fa7e,PodSandboxId:f5aa39a9d
64e91d19e7598d6ecb2cccbe5b5acf5480d595372a5fc59ea209250,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721644193399117470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ea7606da16847bd79a635784b5bb097,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae5f20f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81463a05-184f-4ac2-b042-b3f81506337d name=/runtime.v1.RuntimeService/
ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	800f46b915ccc       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app           0                   a4761b99cae6c       hello-world-app-6778b5fc9f-lj5kn
	73770316dba17       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                              2 minutes ago       Running             nginx                     0                   32307f7379d78       nginx
	0fcc9665ac147       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        2 minutes ago       Running             headlamp                  0                   ec78a497784b7       headlamp-7867546754-25xv5
	8d97614ce321b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 2 minutes ago       Running             gcp-auth                  0                   d7d452b7fdf1a       gcp-auth-5db96cd9b4-5s6sz
	5f0a763da12bc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              patch                     0                   a572214c66659       ingress-nginx-admission-patch-m7v29
	474bd9c4d656b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   5e91548be756c       ingress-nginx-admission-create-rxrsj
	894be7fb7c0a8       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                              3 minutes ago       Running             yakd                      0                   efc9f77cdcb60       yakd-dashboard-799879c74f-6h47n
	af665d7c09f29       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        3 minutes ago       Running             metrics-server            0                   854ef3e35e4e7       metrics-server-c59844bb4-c7dpf
	a023d34393226       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   6ba14bf1d6d23       storage-provisioner
	3009a540031d0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   8b050e4257f8c       coredns-7db6d8ff4d-rdwgl
	1038681b91ded       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             4 minutes ago       Running             kube-proxy                0                   027d60d52d84e       kube-proxy-w2bc4
	1cbab7bd85e1c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             4 minutes ago       Running             kube-controller-manager   0                   a4b8b78d8e3a4       kube-controller-manager-addons-362127
	8095e98c3220c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             4 minutes ago       Running             kube-scheduler            0                   4c5d47da23c1c       kube-scheduler-addons-362127
	ca87cc163cf9b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   91c461356ca15       etcd-addons-362127
	9243bc8ee19c7       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             4 minutes ago       Running             kube-apiserver            0                   f5aa39a9d64e9       kube-apiserver-addons-362127
	
	
	==> coredns [3009a540031d07af2dafd5a461ece3fc0a592c78dffca549823a7a45d64884c6] <==
	[INFO] 10.244.0.6:52006 - 20009 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000119998s
	[INFO] 10.244.0.6:39421 - 60181 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000163984s
	[INFO] 10.244.0.6:39421 - 14870 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000192082s
	[INFO] 10.244.0.6:48467 - 53622 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063444s
	[INFO] 10.244.0.6:48467 - 63600 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063352s
	[INFO] 10.244.0.6:36840 - 55911 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000084433s
	[INFO] 10.244.0.6:36840 - 46693 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000058623s
	[INFO] 10.244.0.6:35631 - 38791 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000127701s
	[INFO] 10.244.0.6:35631 - 26747 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000047442s
	[INFO] 10.244.0.6:35951 - 4450 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051749s
	[INFO] 10.244.0.6:35951 - 15200 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000023281s
	[INFO] 10.244.0.6:55827 - 30363 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048992s
	[INFO] 10.244.0.6:55827 - 31877 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039092s
	[INFO] 10.244.0.6:35091 - 42481 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000039164s
	[INFO] 10.244.0.6:35091 - 23283 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00003734s
	[INFO] 10.244.0.22:36695 - 44770 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000237854s
	[INFO] 10.244.0.22:51732 - 13928 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000916257s
	[INFO] 10.244.0.22:53833 - 38195 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115203s
	[INFO] 10.244.0.22:36418 - 7820 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000075271s
	[INFO] 10.244.0.22:40258 - 49446 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126748s
	[INFO] 10.244.0.22:47003 - 64351 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000078692s
	[INFO] 10.244.0.22:49460 - 40159 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000725386s
	[INFO] 10.244.0.22:53756 - 4990 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000411581s
	[INFO] 10.244.0.26:42406 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000345393s
	[INFO] 10.244.0.26:43937 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000178061s
	
	
	==> describe nodes <==
	Name:               addons-362127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-362127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=addons-362127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T10_29_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-362127
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:29:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-362127
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 10:34:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:32:43 +0000   Mon, 22 Jul 2024 10:29:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:32:43 +0000   Mon, 22 Jul 2024 10:29:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:32:43 +0000   Mon, 22 Jul 2024 10:29:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:32:43 +0000   Mon, 22 Jul 2024 10:30:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    addons-362127
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 4cde07d4e07b438db452d7848feab09e
	  System UUID:                4cde07d4-e07b-438d-b452-d7848feab09e
	  Boot ID:                    1a54dee2-ee71-4081-88cc-549dd9770d8d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-lj5kn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gcp-auth                    gcp-auth-5db96cd9b4-5s6sz                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  headlamp                    headlamp-7867546754-25xv5                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 coredns-7db6d8ff4d-rdwgl                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m22s
	  kube-system                 etcd-addons-362127                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m36s
	  kube-system                 kube-apiserver-addons-362127             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-controller-manager-addons-362127    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-w2bc4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-scheduler-addons-362127             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 metrics-server-c59844bb4-c7dpf           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m16s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  yakd-dashboard              yakd-dashboard-799879c74f-6h47n          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m20s  kube-proxy       
	  Normal  Starting                 4m35s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m35s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m35s  kubelet          Node addons-362127 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m35s  kubelet          Node addons-362127 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m35s  kubelet          Node addons-362127 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m34s  kubelet          Node addons-362127 status is now: NodeReady
	  Normal  RegisteredNode           4m23s  node-controller  Node addons-362127 event: Registered Node addons-362127 in Controller
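	(The Allocated resources summary above is just the column sums of the per-pod figures: CPU requests 100m + 100m + 250m + 200m + 100m + 100m = 850m, i.e. roughly 42% of the node's 2 CPUs; memory requests 70Mi + 100Mi + 200Mi + 128Mi = 498Mi, about 13% of the 3912780Ki capacity; memory limits 170Mi + 256Mi = 426Mi, about 11%.)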
	
	
	==> dmesg <==
	[  +0.063696] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.481334] systemd-fstab-generator[1272]: Ignoring "noauto" option for root device
	[  +0.088524] kauditd_printk_skb: 69 callbacks suppressed
	[Jul22 10:30] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.572833] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
	[  +4.895309] kauditd_printk_skb: 112 callbacks suppressed
	[  +5.003548] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.977644] kauditd_printk_skb: 100 callbacks suppressed
	[ +25.678501] kauditd_printk_skb: 30 callbacks suppressed
	[Jul22 10:31] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.075909] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.240652] kauditd_printk_skb: 75 callbacks suppressed
	[  +6.322502] kauditd_printk_skb: 34 callbacks suppressed
	[ +14.987538] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.639785] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.002030] kauditd_printk_skb: 66 callbacks suppressed
	[  +5.078084] kauditd_printk_skb: 60 callbacks suppressed
	[  +6.501698] kauditd_printk_skb: 33 callbacks suppressed
	[Jul22 10:32] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.442949] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.531902] kauditd_printk_skb: 3 callbacks suppressed
	[ +20.944679] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.263155] kauditd_printk_skb: 33 callbacks suppressed
	[Jul22 10:34] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.348455] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [ca87cc163cf9b4e4096bb74373a2a1ae9bbed994f549306389e78c8c94ab7f06] <==
	{"level":"info","ts":"2024-07-22T10:31:11.119593Z","caller":"traceutil/trace.go:171","msg":"trace[784481501] transaction","detail":"{read_only:false; response_revision:1026; number_of_response:1; }","duration":"275.805133ms","start":"2024-07-22T10:31:10.84378Z","end":"2024-07-22T10:31:11.119585Z","steps":["trace[784481501] 'process raft request'  (duration: 275.311707ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:31:33.905938Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.45914ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8826370110652983807 > lease_revoke:<id:7a7d90d9fd98a95a>","response":"size:28"}
	{"level":"info","ts":"2024-07-22T10:31:33.906085Z","caller":"traceutil/trace.go:171","msg":"trace[1605961939] linearizableReadLoop","detail":"{readStateIndex:1170; appliedIndex:1169; }","duration":"379.98362ms","start":"2024-07-22T10:31:33.526089Z","end":"2024-07-22T10:31:33.906073Z","steps":["trace[1605961939] 'read index received'  (duration: 119.049694ms)","trace[1605961939] 'applied index is now lower than readState.Index'  (duration: 260.932927ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T10:31:33.906425Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"380.309456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-07-22T10:31:33.906491Z","caller":"traceutil/trace.go:171","msg":"trace[1542156137] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1135; }","duration":"380.414363ms","start":"2024-07-22T10:31:33.526066Z","end":"2024-07-22T10:31:33.90648Z","steps":["trace[1542156137] 'agreement among raft nodes before linearized reading'  (duration: 380.140913ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:31:33.906545Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:31:33.526053Z","time spent":"380.478449ms","remote":"127.0.0.1:59168","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11476,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-22T10:31:33.906637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.961107ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"warn","ts":"2024-07-22T10:31:33.906499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"353.381021ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-22T10:31:33.906802Z","caller":"traceutil/trace.go:171","msg":"trace[490097682] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; response_count:0; response_revision:1135; }","duration":"353.776233ms","start":"2024-07-22T10:31:33.553016Z","end":"2024-07-22T10:31:33.906792Z","steps":["trace[490097682] 'agreement among raft nodes before linearized reading'  (duration: 353.378648ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T10:31:33.906936Z","caller":"traceutil/trace.go:171","msg":"trace[1879019766] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1135; }","duration":"178.088655ms","start":"2024-07-22T10:31:33.72865Z","end":"2024-07-22T10:31:33.906739Z","steps":["trace[1879019766] 'agreement among raft nodes before linearized reading'  (duration: 177.870288ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:31:33.906919Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:31:33.553004Z","time spent":"353.901699ms","remote":"127.0.0.1:59522","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":7,"response size":30,"request content":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true "}
	{"level":"info","ts":"2024-07-22T10:31:45.157704Z","caller":"traceutil/trace.go:171","msg":"trace[675707751] linearizableReadLoop","detail":"{readStateIndex:1299; appliedIndex:1298; }","duration":"441.621887ms","start":"2024-07-22T10:31:44.716063Z","end":"2024-07-22T10:31:45.157685Z","steps":["trace[675707751] 'read index received'  (duration: 441.442929ms)","trace[675707751] 'applied index is now lower than readState.Index'  (duration: 178.404µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-22T10:31:45.157835Z","caller":"traceutil/trace.go:171","msg":"trace[1180722934] transaction","detail":"{read_only:false; response_revision:1261; number_of_response:1; }","duration":"522.08685ms","start":"2024-07-22T10:31:44.635741Z","end":"2024-07-22T10:31:45.157827Z","steps":["trace[1180722934] 'process raft request'  (duration: 521.77533ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:31:45.157942Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:31:44.635728Z","time spent":"522.128852ms","remote":"127.0.0.1:59168","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4319,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-2k5sr\" mod_revision:1250 > success:<request_put:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-2k5sr\" value_size:4248 >> failure:<request_range:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-2k5sr\" > >"}
	{"level":"warn","ts":"2024-07-22T10:31:45.15797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.339722ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-07-22T10:31:45.158019Z","caller":"traceutil/trace.go:171","msg":"trace[1916217483] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1261; }","duration":"202.414729ms","start":"2024-07-22T10:31:44.955596Z","end":"2024-07-22T10:31:45.15801Z","steps":["trace[1916217483] 'agreement among raft nodes before linearized reading'  (duration: 202.296487ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:31:45.158147Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"442.083043ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-bc269bbf-3c8b-4d86-a8aa-8acec54e004a\" ","response":"range_response_count:1 size:4206"}
	{"level":"info","ts":"2024-07-22T10:31:45.158163Z","caller":"traceutil/trace.go:171","msg":"trace[1176231384] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-bc269bbf-3c8b-4d86-a8aa-8acec54e004a; range_end:; response_count:1; response_revision:1261; }","duration":"442.117869ms","start":"2024-07-22T10:31:44.716039Z","end":"2024-07-22T10:31:45.158157Z","steps":["trace[1176231384] 'agreement among raft nodes before linearized reading'  (duration: 442.066797ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:31:45.158183Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:31:44.716027Z","time spent":"442.150547ms","remote":"127.0.0.1:59168","response type":"/etcdserverpb.KV/Range","request count":0,"request size":94,"response count":1,"response size":4229,"request content":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-bc269bbf-3c8b-4d86-a8aa-8acec54e004a\" "}
	{"level":"info","ts":"2024-07-22T10:32:20.387611Z","caller":"traceutil/trace.go:171","msg":"trace[1904002428] transaction","detail":"{read_only:false; response_revision:1529; number_of_response:1; }","duration":"144.736081ms","start":"2024-07-22T10:32:20.242852Z","end":"2024-07-22T10:32:20.387588Z","steps":["trace[1904002428] 'process raft request'  (duration: 144.644087ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T10:32:21.069695Z","caller":"traceutil/trace.go:171","msg":"trace[1024143635] transaction","detail":"{read_only:false; response_revision:1531; number_of_response:1; }","duration":"314.184445ms","start":"2024-07-22T10:32:20.755493Z","end":"2024-07-22T10:32:21.069678Z","steps":["trace[1024143635] 'process raft request'  (duration: 314.091831ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:32:21.069865Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:32:20.755469Z","time spent":"314.285965ms","remote":"127.0.0.1:59262","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":486,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1507 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:427 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"info","ts":"2024-07-22T10:32:21.070207Z","caller":"traceutil/trace.go:171","msg":"trace[222923741] linearizableReadLoop","detail":"{readStateIndex:1581; appliedIndex:1581; }","duration":"236.359642ms","start":"2024-07-22T10:32:20.833837Z","end":"2024-07-22T10:32:21.070196Z","steps":["trace[222923741] 'read index received'  (duration: 235.670839ms)","trace[222923741] 'applied index is now lower than readState.Index'  (duration: 686.847µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T10:32:21.070373Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.527529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6032"}
	{"level":"info","ts":"2024-07-22T10:32:21.070411Z","caller":"traceutil/trace.go:171","msg":"trace[2121093447] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1531; }","duration":"236.592943ms","start":"2024-07-22T10:32:20.833811Z","end":"2024-07-22T10:32:21.070404Z","steps":["trace[2121093447] 'agreement among raft nodes before linearized reading'  (duration: 236.440252ms)"],"step_count":1}
	
	
	==> gcp-auth [8d97614ce321bdd3c0a5f2626302115312b0d38930a1ca04d667bd107517db29] <==
	2024/07/22 10:31:36 GCP Auth Webhook started!
	2024/07/22 10:31:37 Ready to marshal response ...
	2024/07/22 10:31:37 Ready to write response ...
	2024/07/22 10:31:37 Ready to marshal response ...
	2024/07/22 10:31:37 Ready to write response ...
	2024/07/22 10:31:37 Ready to marshal response ...
	2024/07/22 10:31:37 Ready to write response ...
	2024/07/22 10:31:41 Ready to marshal response ...
	2024/07/22 10:31:41 Ready to write response ...
	2024/07/22 10:31:43 Ready to marshal response ...
	2024/07/22 10:31:43 Ready to write response ...
	2024/07/22 10:31:43 Ready to marshal response ...
	2024/07/22 10:31:43 Ready to write response ...
	2024/07/22 10:31:47 Ready to marshal response ...
	2024/07/22 10:31:47 Ready to write response ...
	2024/07/22 10:31:53 Ready to marshal response ...
	2024/07/22 10:31:53 Ready to write response ...
	2024/07/22 10:32:05 Ready to marshal response ...
	2024/07/22 10:32:05 Ready to write response ...
	2024/07/22 10:32:15 Ready to marshal response ...
	2024/07/22 10:32:15 Ready to write response ...
	2024/07/22 10:32:43 Ready to marshal response ...
	2024/07/22 10:32:43 Ready to write response ...
	2024/07/22 10:34:24 Ready to marshal response ...
	2024/07/22 10:34:24 Ready to write response ...
	
	
	==> kernel <==
	 10:34:35 up 5 min,  0 users,  load average: 0.81, 1.41, 0.74
	Linux addons-362127 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9243bc8ee19c7a6e52cac3020b654481b13ba5e02863bda9c7eb8f933bd3fa7e] <==
	E0722 10:31:59.290880       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0722 10:31:59.291733       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.29.42:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.29.42:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.29.42:443: connect: connection refused
	E0722 10:31:59.305921       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.29.42:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.29.42:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.29.42:443: connect: connection refused
	E0722 10:31:59.313270       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.29.42:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.29.42:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.29.42:443: connect: connection refused
	E0722 10:31:59.334276       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.29.42:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.29.42:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.29.42:443: connect: connection refused
	I0722 10:31:59.572361       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0722 10:31:59.851941       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0722 10:32:00.881473       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0722 10:32:05.379779       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0722 10:32:05.548020       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.23.84"}
	E0722 10:32:09.204952       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0722 10:32:28.896912       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0722 10:32:59.091404       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 10:32:59.091475       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0722 10:32:59.121087       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 10:32:59.121133       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0722 10:32:59.144661       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 10:32:59.144714       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0722 10:32:59.187436       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 10:32:59.187478       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0722 10:33:00.125627       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0722 10:33:00.188407       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0722 10:33:00.214289       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0722 10:34:24.407620       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.218.100"}
	
	
	==> kube-controller-manager [1cbab7bd85e1c45b81b0efd7edebef5063711bcc40003e385cfb6f934ca225da] <==
	E0722 10:33:22.434677       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:33:37.287214       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:33:37.287363       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:33:38.836027       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:33:38.836124       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:33:40.570379       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:33:40.570476       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:33:54.511569       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:33:54.511668       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:34:18.005110       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:34:18.005262       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:34:19.767451       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:34:19.767553       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0722 10:34:24.281691       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="61.070109ms"
	I0722 10:34:24.300479       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="18.322851ms"
	I0722 10:34:24.303302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="103.694µs"
	I0722 10:34:26.105176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="12.600612ms"
	I0722 10:34:26.105247       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="35.921µs"
	W0722 10:34:26.545612       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:34:26.545648       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0722 10:34:26.704893       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0722 10:34:26.709673       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="11.699µs"
	I0722 10:34:26.718012       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	W0722 10:34:30.786282       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:34:30.786524       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [1038681b91ded1a2d4021ea99fa938b6f32fedd35dbd401e2bd11648def7d0d4] <==
	I0722 10:30:14.567233       1 server_linux.go:69] "Using iptables proxy"
	I0722 10:30:14.580957       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.147"]
	I0722 10:30:14.768579       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 10:30:14.768620       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 10:30:14.768634       1 server_linux.go:165] "Using iptables Proxier"
	I0722 10:30:14.771299       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 10:30:14.771535       1 server.go:872] "Version info" version="v1.30.3"
	I0722 10:30:14.771547       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:30:14.772861       1 config.go:192] "Starting service config controller"
	I0722 10:30:14.772874       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 10:30:14.772907       1 config.go:101] "Starting endpoint slice config controller"
	I0722 10:30:14.772910       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 10:30:14.781038       1 config.go:319] "Starting node config controller"
	I0722 10:30:14.781048       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 10:30:14.873539       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 10:30:14.874131       1 shared_informer.go:320] Caches are synced for service config
	I0722 10:30:14.881410       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8095e98c3220c8c62fe67b807e326eb446a2367817da1ba2e256d88b98cfc382] <==
	E0722 10:29:56.356792       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 10:29:56.356775       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0722 10:29:56.356892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 10:29:56.356939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 10:29:56.356950       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 10:29:56.356957       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 10:29:56.356288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 10:29:56.356998       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 10:29:56.357040       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 10:29:56.357074       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 10:29:57.159886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 10:29:57.159993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 10:29:57.188163       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 10:29:57.188244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 10:29:57.220712       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 10:29:57.220739       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 10:29:57.246454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 10:29:57.246698       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 10:29:57.496497       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 10:29:57.496589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0722 10:29:57.517799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 10:29:57.517945       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 10:29:57.567863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 10:29:57.568003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0722 10:29:59.547802       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 10:34:24 addons-362127 kubelet[1279]: I0722 10:34:24.260484    1279 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc97fa01-6616-4254-93df-9873804b1648" containerName="csi-snapshotter"
	Jul 22 10:34:24 addons-362127 kubelet[1279]: I0722 10:34:24.385892    1279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s78j9\" (UniqueName: \"kubernetes.io/projected/43b1d5c8-b098-4afc-b72d-a5e7c55e8230-kube-api-access-s78j9\") pod \"hello-world-app-6778b5fc9f-lj5kn\" (UID: \"43b1d5c8-b098-4afc-b72d-a5e7c55e8230\") " pod="default/hello-world-app-6778b5fc9f-lj5kn"
	Jul 22 10:34:24 addons-362127 kubelet[1279]: I0722 10:34:24.385977    1279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/43b1d5c8-b098-4afc-b72d-a5e7c55e8230-gcp-creds\") pod \"hello-world-app-6778b5fc9f-lj5kn\" (UID: \"43b1d5c8-b098-4afc-b72d-a5e7c55e8230\") " pod="default/hello-world-app-6778b5fc9f-lj5kn"
	Jul 22 10:34:25 addons-362127 kubelet[1279]: I0722 10:34:25.493454    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rpqn\" (UniqueName: \"kubernetes.io/projected/f2028cf5-46d0-41bc-b6b8-bc8e75607ab4-kube-api-access-4rpqn\") pod \"f2028cf5-46d0-41bc-b6b8-bc8e75607ab4\" (UID: \"f2028cf5-46d0-41bc-b6b8-bc8e75607ab4\") "
	Jul 22 10:34:25 addons-362127 kubelet[1279]: I0722 10:34:25.497039    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2028cf5-46d0-41bc-b6b8-bc8e75607ab4-kube-api-access-4rpqn" (OuterVolumeSpecName: "kube-api-access-4rpqn") pod "f2028cf5-46d0-41bc-b6b8-bc8e75607ab4" (UID: "f2028cf5-46d0-41bc-b6b8-bc8e75607ab4"). InnerVolumeSpecName "kube-api-access-4rpqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 22 10:34:25 addons-362127 kubelet[1279]: I0722 10:34:25.594639    1279 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4rpqn\" (UniqueName: \"kubernetes.io/projected/f2028cf5-46d0-41bc-b6b8-bc8e75607ab4-kube-api-access-4rpqn\") on node \"addons-362127\" DevicePath \"\""
	Jul 22 10:34:26 addons-362127 kubelet[1279]: I0722 10:34:26.080689    1279 scope.go:117] "RemoveContainer" containerID="411312e688f0985af8bfa87600a770913853aea354094abaa136557c356936c7"
	Jul 22 10:34:26 addons-362127 kubelet[1279]: I0722 10:34:26.118857    1279 scope.go:117] "RemoveContainer" containerID="411312e688f0985af8bfa87600a770913853aea354094abaa136557c356936c7"
	Jul 22 10:34:26 addons-362127 kubelet[1279]: E0722 10:34:26.119584    1279 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"411312e688f0985af8bfa87600a770913853aea354094abaa136557c356936c7\": container with ID starting with 411312e688f0985af8bfa87600a770913853aea354094abaa136557c356936c7 not found: ID does not exist" containerID="411312e688f0985af8bfa87600a770913853aea354094abaa136557c356936c7"
	Jul 22 10:34:26 addons-362127 kubelet[1279]: I0722 10:34:26.119631    1279 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"411312e688f0985af8bfa87600a770913853aea354094abaa136557c356936c7"} err="failed to get container status \"411312e688f0985af8bfa87600a770913853aea354094abaa136557c356936c7\": rpc error: code = NotFound desc = could not find container \"411312e688f0985af8bfa87600a770913853aea354094abaa136557c356936c7\": container with ID starting with 411312e688f0985af8bfa87600a770913853aea354094abaa136557c356936c7 not found: ID does not exist"
	Jul 22 10:34:26 addons-362127 kubelet[1279]: I0722 10:34:26.129044    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-lj5kn" podStartSLOduration=1.4866868420000001 podStartE2EDuration="2.129015792s" podCreationTimestamp="2024-07-22 10:34:24 +0000 UTC" firstStartedPulling="2024-07-22 10:34:24.821450306 +0000 UTC m=+265.893903739" lastFinishedPulling="2024-07-22 10:34:25.463779255 +0000 UTC m=+266.536232689" observedRunningTime="2024-07-22 10:34:26.095603841 +0000 UTC m=+267.168057291" watchObservedRunningTime="2024-07-22 10:34:26.129015792 +0000 UTC m=+267.201469243"
	Jul 22 10:34:27 addons-362127 kubelet[1279]: I0722 10:34:27.051239    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aeb8fac-6aeb-4860-80c3-4bec211c87bf" path="/var/lib/kubelet/pods/5aeb8fac-6aeb-4860-80c3-4bec211c87bf/volumes"
	Jul 22 10:34:27 addons-362127 kubelet[1279]: I0722 10:34:27.051726    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c1b7d9a-6e5b-4cf9-9f52-319a7d79a2ed" path="/var/lib/kubelet/pods/7c1b7d9a-6e5b-4cf9-9f52-319a7d79a2ed/volumes"
	Jul 22 10:34:27 addons-362127 kubelet[1279]: I0722 10:34:27.052073    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2028cf5-46d0-41bc-b6b8-bc8e75607ab4" path="/var/lib/kubelet/pods/f2028cf5-46d0-41bc-b6b8-bc8e75607ab4/volumes"
	Jul 22 10:34:30 addons-362127 kubelet[1279]: I0722 10:34:30.024084    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9hgt\" (UniqueName: \"kubernetes.io/projected/e6170647-7d96-43d7-8305-0a39a956c237-kube-api-access-f9hgt\") pod \"e6170647-7d96-43d7-8305-0a39a956c237\" (UID: \"e6170647-7d96-43d7-8305-0a39a956c237\") "
	Jul 22 10:34:30 addons-362127 kubelet[1279]: I0722 10:34:30.024150    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6170647-7d96-43d7-8305-0a39a956c237-webhook-cert\") pod \"e6170647-7d96-43d7-8305-0a39a956c237\" (UID: \"e6170647-7d96-43d7-8305-0a39a956c237\") "
	Jul 22 10:34:30 addons-362127 kubelet[1279]: I0722 10:34:30.030604    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6170647-7d96-43d7-8305-0a39a956c237-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "e6170647-7d96-43d7-8305-0a39a956c237" (UID: "e6170647-7d96-43d7-8305-0a39a956c237"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 22 10:34:30 addons-362127 kubelet[1279]: I0722 10:34:30.030753    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6170647-7d96-43d7-8305-0a39a956c237-kube-api-access-f9hgt" (OuterVolumeSpecName: "kube-api-access-f9hgt") pod "e6170647-7d96-43d7-8305-0a39a956c237" (UID: "e6170647-7d96-43d7-8305-0a39a956c237"). InnerVolumeSpecName "kube-api-access-f9hgt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 22 10:34:30 addons-362127 kubelet[1279]: I0722 10:34:30.102145    1279 scope.go:117] "RemoveContainer" containerID="c4ede864d8660701eab28386fbdf1983e7f0c3951ee396e44cf45f4752b1e243"
	Jul 22 10:34:30 addons-362127 kubelet[1279]: I0722 10:34:30.124803    1279 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-f9hgt\" (UniqueName: \"kubernetes.io/projected/e6170647-7d96-43d7-8305-0a39a956c237-kube-api-access-f9hgt\") on node \"addons-362127\" DevicePath \"\""
	Jul 22 10:34:30 addons-362127 kubelet[1279]: I0722 10:34:30.124823    1279 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6170647-7d96-43d7-8305-0a39a956c237-webhook-cert\") on node \"addons-362127\" DevicePath \"\""
	Jul 22 10:34:30 addons-362127 kubelet[1279]: I0722 10:34:30.128702    1279 scope.go:117] "RemoveContainer" containerID="c4ede864d8660701eab28386fbdf1983e7f0c3951ee396e44cf45f4752b1e243"
	Jul 22 10:34:30 addons-362127 kubelet[1279]: E0722 10:34:30.129250    1279 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4ede864d8660701eab28386fbdf1983e7f0c3951ee396e44cf45f4752b1e243\": container with ID starting with c4ede864d8660701eab28386fbdf1983e7f0c3951ee396e44cf45f4752b1e243 not found: ID does not exist" containerID="c4ede864d8660701eab28386fbdf1983e7f0c3951ee396e44cf45f4752b1e243"
	Jul 22 10:34:30 addons-362127 kubelet[1279]: I0722 10:34:30.129277    1279 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ede864d8660701eab28386fbdf1983e7f0c3951ee396e44cf45f4752b1e243"} err="failed to get container status \"c4ede864d8660701eab28386fbdf1983e7f0c3951ee396e44cf45f4752b1e243\": rpc error: code = NotFound desc = could not find container \"c4ede864d8660701eab28386fbdf1983e7f0c3951ee396e44cf45f4752b1e243\": container with ID starting with c4ede864d8660701eab28386fbdf1983e7f0c3951ee396e44cf45f4752b1e243 not found: ID does not exist"
	Jul 22 10:34:31 addons-362127 kubelet[1279]: I0722 10:34:31.049678    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6170647-7d96-43d7-8305-0a39a956c237" path="/var/lib/kubelet/pods/e6170647-7d96-43d7-8305-0a39a956c237/volumes"
	
	
	==> storage-provisioner [a023d343932260c127b3307e1f989c331d5447f05f601090ab7113a5cb23a336] <==
	I0722 10:30:20.431247       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 10:30:20.504713       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 10:30:20.504770       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 10:30:20.533459       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 10:30:20.533663       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-362127_2d69a6bc-f8dc-402f-8c5b-e2205587b1d2!
	I0722 10:30:20.538410       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af7272b2-74b5-4117-9eb8-d62733289c47", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-362127_2d69a6bc-f8dc-402f-8c5b-e2205587b1d2 became leader
	I0722 10:30:20.642795       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-362127_2d69a6bc-f8dc-402f-8c5b-e2205587b1d2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-362127 -n addons-362127
helpers_test.go:261: (dbg) Run:  kubectl --context addons-362127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (150.70s)

                                                
                                    
TestAddons/parallel/MetricsServer (334.9s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.011778ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-c7dpf" [7d0a2a6c-b7cf-488c-97d6-3fb459a706c9] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
helpers_test.go:344: "metrics-server-c59844bb4-c7dpf" [7d0a2a6c-b7cf-488c-97d6-3fb459a706c9] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003949755s
addons_test.go:417: (dbg) Run:  kubectl --context addons-362127 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-362127 top pods -n kube-system: exit status 1 (64.109254ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-362127, age: 2m3.637605442s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-362127 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-362127 top pods -n kube-system: exit status 1 (61.456271ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-362127, age: 2m5.585363745s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-362127 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-362127 top pods -n kube-system: exit status 1 (64.401273ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-362127, age: 2m11.481802125s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-362127 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-362127 top pods -n kube-system: exit status 1 (62.811631ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rdwgl, age: 2m1.196248788s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-362127 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-362127 top pods -n kube-system: exit status 1 (69.036532ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rdwgl, age: 2m9.729361432s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-362127 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-362127 top pods -n kube-system: exit status 1 (63.880474ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rdwgl, age: 2m30.005027606s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-362127 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-362127 top pods -n kube-system: exit status 1 (66.443369ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rdwgl, age: 2m46.874472661s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-362127 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-362127 top pods -n kube-system: exit status 1 (66.411549ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rdwgl, age: 3m13.395671514s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-362127 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-362127 top pods -n kube-system: exit status 1 (62.531776ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rdwgl, age: 4m15.797868403s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-362127 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-362127 top pods -n kube-system: exit status 1 (60.427767ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rdwgl, age: 5m31.033113982s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-362127 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-362127 top pods -n kube-system: exit status 1 (69.650955ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rdwgl, age: 6m37.202554038s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-362127 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-362127 top pods -n kube-system: exit status 1 (60.804812ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rdwgl, age: 7m16.850743492s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
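The repeated "Metrics not available" errors are consistent with the kube-apiserver log above, where the v1beta1.metrics.k8s.io APIService is reported as failing ("connection refused" against 10.100.29.42:443). A minimal manual check of the metrics pipeline, assuming the same kube context and that the pod metrics-server-c59844bb4-c7dpf belongs to a Deployment named metrics-server in kube-system, could be:

	kubectl --context addons-362127 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-362127 get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
	kubectl --context addons-362127 -n kube-system logs deploy/metrics-server

If the APIService never reports Available=True, kubectl top will keep failing no matter how long the test retries.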
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-362127 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-362127 -n addons-362127
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-362127 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-362127 logs -n 25: (1.331656944s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-196061                                                                     | download-only-196061 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| delete  | -p download-only-451721                                                                     | download-only-451721 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| delete  | -p download-only-832339                                                                     | download-only-832339 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| delete  | -p download-only-196061                                                                     | download-only-196061 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-224708 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC |                     |
	|         | binary-mirror-224708                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42063                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-224708                                                                     | binary-mirror-224708 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| addons  | enable dashboard -p                                                                         | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC |                     |
	|         | addons-362127                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC |                     |
	|         | addons-362127                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-362127 --wait=true                                                                | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:31 UTC |
	|         | -p addons-362127                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:31 UTC |
	|         | -p addons-362127                                                                            |                      |         |         |                     |                     |
	| addons  | addons-362127 addons disable                                                                | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:31 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-362127 ip                                                                            | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:31 UTC |
	| addons  | addons-362127 addons disable                                                                | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:31 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:31 UTC |
	|         | addons-362127                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-362127 ssh cat                                                                       | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:31 UTC |
	|         | /opt/local-path-provisioner/pvc-bc269bbf-3c8b-4d86-a8aa-8acec54e004a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-362127 addons disable                                                                | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:32 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:31 UTC | 22 Jul 24 10:32 UTC |
	|         | addons-362127                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-362127 ssh curl -s                                                                   | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:32 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-362127 addons                                                                        | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:32 UTC | 22 Jul 24 10:32 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-362127 addons                                                                        | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:32 UTC | 22 Jul 24 10:32 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-362127 ip                                                                            | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:34 UTC | 22 Jul 24 10:34 UTC |
	| addons  | addons-362127 addons disable                                                                | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:34 UTC | 22 Jul 24 10:34 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-362127 addons disable                                                                | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:34 UTC | 22 Jul 24 10:34 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-362127 addons                                                                        | addons-362127        | jenkins | v1.33.1 | 22 Jul 24 10:37 UTC | 22 Jul 24 10:37 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 10:29:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 10:29:19.589001   14017 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:29:19.589248   14017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:29:19.589258   14017 out.go:304] Setting ErrFile to fd 2...
	I0722 10:29:19.589262   14017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:29:19.589451   14017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:29:19.590019   14017 out.go:298] Setting JSON to false
	I0722 10:29:19.590810   14017 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":712,"bootTime":1721643448,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 10:29:19.590875   14017 start.go:139] virtualization: kvm guest
	I0722 10:29:19.592705   14017 out.go:177] * [addons-362127] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 10:29:19.593814   14017 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 10:29:19.593808   14017 notify.go:220] Checking for updates...
	I0722 10:29:19.596165   14017 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 10:29:19.597386   14017 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:29:19.598534   14017 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:29:19.599512   14017 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 10:29:19.600526   14017 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 10:29:19.601749   14017 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 10:29:19.632490   14017 out.go:177] * Using the kvm2 driver based on user configuration
	I0722 10:29:19.633636   14017 start.go:297] selected driver: kvm2
	I0722 10:29:19.633659   14017 start.go:901] validating driver "kvm2" against <nil>
	I0722 10:29:19.633672   14017 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 10:29:19.634320   14017 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:29:19.634391   14017 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 10:29:19.648637   14017 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 10:29:19.648680   14017 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 10:29:19.648931   14017 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 10:29:19.648997   14017 cni.go:84] Creating CNI manager for ""
	I0722 10:29:19.649013   14017 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 10:29:19.649026   14017 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 10:29:19.649087   14017 start.go:340] cluster config:
	{Name:addons-362127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-362127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:29:19.649216   14017 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:29:19.650908   14017 out.go:177] * Starting "addons-362127" primary control-plane node in "addons-362127" cluster
	I0722 10:29:19.652097   14017 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:29:19.652136   14017 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 10:29:19.652146   14017 cache.go:56] Caching tarball of preloaded images
	I0722 10:29:19.652238   14017 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 10:29:19.652251   14017 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 10:29:19.652579   14017 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/config.json ...
	I0722 10:29:19.652607   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/config.json: {Name:mkc892ee9b8d8fe87cfad510947acbb2a73e77b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:19.652757   14017 start.go:360] acquireMachinesLock for addons-362127: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 10:29:19.652835   14017 start.go:364] duration metric: took 62.749µs to acquireMachinesLock for "addons-362127"
	I0722 10:29:19.652859   14017 start.go:93] Provisioning new machine with config: &{Name:addons-362127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-362127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:29:19.652940   14017 start.go:125] createHost starting for "" (driver="kvm2")
	I0722 10:29:19.654399   14017 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0722 10:29:19.654528   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:29:19.654569   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:29:19.668054   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45093
	I0722 10:29:19.668473   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:29:19.668987   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:29:19.669009   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:29:19.669274   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:29:19.669437   14017 main.go:141] libmachine: (addons-362127) Calling .GetMachineName
	I0722 10:29:19.669575   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:29:19.669720   14017 start.go:159] libmachine.API.Create for "addons-362127" (driver="kvm2")
	I0722 10:29:19.669743   14017 client.go:168] LocalClient.Create starting
	I0722 10:29:19.669771   14017 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem
	I0722 10:29:20.171755   14017 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem
	I0722 10:29:20.254166   14017 main.go:141] libmachine: Running pre-create checks...
	I0722 10:29:20.254185   14017 main.go:141] libmachine: (addons-362127) Calling .PreCreateCheck
	I0722 10:29:20.254643   14017 main.go:141] libmachine: (addons-362127) Calling .GetConfigRaw
	I0722 10:29:20.255041   14017 main.go:141] libmachine: Creating machine...
	I0722 10:29:20.255054   14017 main.go:141] libmachine: (addons-362127) Calling .Create
	I0722 10:29:20.255210   14017 main.go:141] libmachine: (addons-362127) Creating KVM machine...
	I0722 10:29:20.256548   14017 main.go:141] libmachine: (addons-362127) DBG | found existing default KVM network
	I0722 10:29:20.257226   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:20.257104   14039 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0722 10:29:20.257269   14017 main.go:141] libmachine: (addons-362127) DBG | created network xml: 
	I0722 10:29:20.257289   14017 main.go:141] libmachine: (addons-362127) DBG | <network>
	I0722 10:29:20.257300   14017 main.go:141] libmachine: (addons-362127) DBG |   <name>mk-addons-362127</name>
	I0722 10:29:20.257311   14017 main.go:141] libmachine: (addons-362127) DBG |   <dns enable='no'/>
	I0722 10:29:20.257321   14017 main.go:141] libmachine: (addons-362127) DBG |   
	I0722 10:29:20.257331   14017 main.go:141] libmachine: (addons-362127) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0722 10:29:20.257342   14017 main.go:141] libmachine: (addons-362127) DBG |     <dhcp>
	I0722 10:29:20.257352   14017 main.go:141] libmachine: (addons-362127) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0722 10:29:20.257364   14017 main.go:141] libmachine: (addons-362127) DBG |     </dhcp>
	I0722 10:29:20.257376   14017 main.go:141] libmachine: (addons-362127) DBG |   </ip>
	I0722 10:29:20.257387   14017 main.go:141] libmachine: (addons-362127) DBG |   
	I0722 10:29:20.257395   14017 main.go:141] libmachine: (addons-362127) DBG | </network>
	I0722 10:29:20.257408   14017 main.go:141] libmachine: (addons-362127) DBG | 
	I0722 10:29:20.262685   14017 main.go:141] libmachine: (addons-362127) DBG | trying to create private KVM network mk-addons-362127 192.168.39.0/24...
	I0722 10:29:20.326271   14017 main.go:141] libmachine: (addons-362127) DBG | private KVM network mk-addons-362127 192.168.39.0/24 created
	I0722 10:29:20.326300   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:20.326251   14039 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:29:20.326326   14017 main.go:141] libmachine: (addons-362127) Setting up store path in /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127 ...
	I0722 10:29:20.326343   14017 main.go:141] libmachine: (addons-362127) Building disk image from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 10:29:20.326429   14017 main.go:141] libmachine: (addons-362127) Downloading /home/jenkins/minikube-integration/19313-5960/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 10:29:20.561832   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:20.561691   14039 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa...
	I0722 10:29:20.676096   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:20.676002   14039 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/addons-362127.rawdisk...
	I0722 10:29:20.676124   14017 main.go:141] libmachine: (addons-362127) DBG | Writing magic tar header
	I0722 10:29:20.676145   14017 main.go:141] libmachine: (addons-362127) DBG | Writing SSH key tar header
	I0722 10:29:20.676204   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:20.676137   14039 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127 ...
	I0722 10:29:20.676276   14017 main.go:141] libmachine: (addons-362127) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127
	I0722 10:29:20.676297   14017 main.go:141] libmachine: (addons-362127) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines
	I0722 10:29:20.676310   14017 main.go:141] libmachine: (addons-362127) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127 (perms=drwx------)
	I0722 10:29:20.676327   14017 main.go:141] libmachine: (addons-362127) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines (perms=drwxr-xr-x)
	I0722 10:29:20.676333   14017 main.go:141] libmachine: (addons-362127) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube (perms=drwxr-xr-x)
	I0722 10:29:20.676345   14017 main.go:141] libmachine: (addons-362127) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960 (perms=drwxrwxr-x)
	I0722 10:29:20.676351   14017 main.go:141] libmachine: (addons-362127) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:29:20.676364   14017 main.go:141] libmachine: (addons-362127) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960
	I0722 10:29:20.676374   14017 main.go:141] libmachine: (addons-362127) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0722 10:29:20.676406   14017 main.go:141] libmachine: (addons-362127) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0722 10:29:20.676427   14017 main.go:141] libmachine: (addons-362127) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0722 10:29:20.676436   14017 main.go:141] libmachine: (addons-362127) DBG | Checking permissions on dir: /home/jenkins
	I0722 10:29:20.676454   14017 main.go:141] libmachine: (addons-362127) DBG | Checking permissions on dir: /home
	I0722 10:29:20.676467   14017 main.go:141] libmachine: (addons-362127) Creating domain...
	I0722 10:29:20.676476   14017 main.go:141] libmachine: (addons-362127) DBG | Skipping /home - not owner
	I0722 10:29:20.677387   14017 main.go:141] libmachine: (addons-362127) define libvirt domain using xml: 
	I0722 10:29:20.677407   14017 main.go:141] libmachine: (addons-362127) <domain type='kvm'>
	I0722 10:29:20.677417   14017 main.go:141] libmachine: (addons-362127)   <name>addons-362127</name>
	I0722 10:29:20.677424   14017 main.go:141] libmachine: (addons-362127)   <memory unit='MiB'>4000</memory>
	I0722 10:29:20.677432   14017 main.go:141] libmachine: (addons-362127)   <vcpu>2</vcpu>
	I0722 10:29:20.677439   14017 main.go:141] libmachine: (addons-362127)   <features>
	I0722 10:29:20.677448   14017 main.go:141] libmachine: (addons-362127)     <acpi/>
	I0722 10:29:20.677458   14017 main.go:141] libmachine: (addons-362127)     <apic/>
	I0722 10:29:20.677467   14017 main.go:141] libmachine: (addons-362127)     <pae/>
	I0722 10:29:20.677476   14017 main.go:141] libmachine: (addons-362127)     
	I0722 10:29:20.677484   14017 main.go:141] libmachine: (addons-362127)   </features>
	I0722 10:29:20.677497   14017 main.go:141] libmachine: (addons-362127)   <cpu mode='host-passthrough'>
	I0722 10:29:20.677508   14017 main.go:141] libmachine: (addons-362127)   
	I0722 10:29:20.677527   14017 main.go:141] libmachine: (addons-362127)   </cpu>
	I0722 10:29:20.677538   14017 main.go:141] libmachine: (addons-362127)   <os>
	I0722 10:29:20.677544   14017 main.go:141] libmachine: (addons-362127)     <type>hvm</type>
	I0722 10:29:20.677553   14017 main.go:141] libmachine: (addons-362127)     <boot dev='cdrom'/>
	I0722 10:29:20.677564   14017 main.go:141] libmachine: (addons-362127)     <boot dev='hd'/>
	I0722 10:29:20.677577   14017 main.go:141] libmachine: (addons-362127)     <bootmenu enable='no'/>
	I0722 10:29:20.677592   14017 main.go:141] libmachine: (addons-362127)   </os>
	I0722 10:29:20.677627   14017 main.go:141] libmachine: (addons-362127)   <devices>
	I0722 10:29:20.677659   14017 main.go:141] libmachine: (addons-362127)     <disk type='file' device='cdrom'>
	I0722 10:29:20.677682   14017 main.go:141] libmachine: (addons-362127)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/boot2docker.iso'/>
	I0722 10:29:20.677696   14017 main.go:141] libmachine: (addons-362127)       <target dev='hdc' bus='scsi'/>
	I0722 10:29:20.677721   14017 main.go:141] libmachine: (addons-362127)       <readonly/>
	I0722 10:29:20.677740   14017 main.go:141] libmachine: (addons-362127)     </disk>
	I0722 10:29:20.677757   14017 main.go:141] libmachine: (addons-362127)     <disk type='file' device='disk'>
	I0722 10:29:20.677771   14017 main.go:141] libmachine: (addons-362127)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0722 10:29:20.677789   14017 main.go:141] libmachine: (addons-362127)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/addons-362127.rawdisk'/>
	I0722 10:29:20.677804   14017 main.go:141] libmachine: (addons-362127)       <target dev='hda' bus='virtio'/>
	I0722 10:29:20.677818   14017 main.go:141] libmachine: (addons-362127)     </disk>
	I0722 10:29:20.677840   14017 main.go:141] libmachine: (addons-362127)     <interface type='network'>
	I0722 10:29:20.677861   14017 main.go:141] libmachine: (addons-362127)       <source network='mk-addons-362127'/>
	I0722 10:29:20.677875   14017 main.go:141] libmachine: (addons-362127)       <model type='virtio'/>
	I0722 10:29:20.677886   14017 main.go:141] libmachine: (addons-362127)     </interface>
	I0722 10:29:20.677902   14017 main.go:141] libmachine: (addons-362127)     <interface type='network'>
	I0722 10:29:20.677916   14017 main.go:141] libmachine: (addons-362127)       <source network='default'/>
	I0722 10:29:20.677944   14017 main.go:141] libmachine: (addons-362127)       <model type='virtio'/>
	I0722 10:29:20.677966   14017 main.go:141] libmachine: (addons-362127)     </interface>
	I0722 10:29:20.677979   14017 main.go:141] libmachine: (addons-362127)     <serial type='pty'>
	I0722 10:29:20.677992   14017 main.go:141] libmachine: (addons-362127)       <target port='0'/>
	I0722 10:29:20.678004   14017 main.go:141] libmachine: (addons-362127)     </serial>
	I0722 10:29:20.678014   14017 main.go:141] libmachine: (addons-362127)     <console type='pty'>
	I0722 10:29:20.678045   14017 main.go:141] libmachine: (addons-362127)       <target type='serial' port='0'/>
	I0722 10:29:20.678060   14017 main.go:141] libmachine: (addons-362127)     </console>
	I0722 10:29:20.678072   14017 main.go:141] libmachine: (addons-362127)     <rng model='virtio'>
	I0722 10:29:20.678083   14017 main.go:141] libmachine: (addons-362127)       <backend model='random'>/dev/random</backend>
	I0722 10:29:20.678094   14017 main.go:141] libmachine: (addons-362127)     </rng>
	I0722 10:29:20.678101   14017 main.go:141] libmachine: (addons-362127)     
	I0722 10:29:20.678111   14017 main.go:141] libmachine: (addons-362127)     
	I0722 10:29:20.678118   14017 main.go:141] libmachine: (addons-362127)   </devices>
	I0722 10:29:20.678139   14017 main.go:141] libmachine: (addons-362127) </domain>
	I0722 10:29:20.678155   14017 main.go:141] libmachine: (addons-362127) 
	I0722 10:29:20.683444   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:6d:18:a7 in network default
	I0722 10:29:20.683944   14017 main.go:141] libmachine: (addons-362127) Ensuring networks are active...
	I0722 10:29:20.683971   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:20.684568   14017 main.go:141] libmachine: (addons-362127) Ensuring network default is active
	I0722 10:29:20.684882   14017 main.go:141] libmachine: (addons-362127) Ensuring network mk-addons-362127 is active
	I0722 10:29:20.685373   14017 main.go:141] libmachine: (addons-362127) Getting domain xml...
	I0722 10:29:20.685992   14017 main.go:141] libmachine: (addons-362127) Creating domain...
	I0722 10:29:22.048532   14017 main.go:141] libmachine: (addons-362127) Waiting to get IP...
	I0722 10:29:22.049438   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:22.049871   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:22.049920   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:22.049865   14039 retry.go:31] will retry after 296.885308ms: waiting for machine to come up
	I0722 10:29:22.348410   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:22.348764   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:22.348802   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:22.348738   14039 retry.go:31] will retry after 341.960078ms: waiting for machine to come up
	I0722 10:29:22.692189   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:22.692703   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:22.692729   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:22.692652   14039 retry.go:31] will retry after 480.197578ms: waiting for machine to come up
	I0722 10:29:23.174095   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:23.174562   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:23.174589   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:23.174507   14039 retry.go:31] will retry after 471.102584ms: waiting for machine to come up
	I0722 10:29:23.646990   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:23.647460   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:23.647492   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:23.647417   14039 retry.go:31] will retry after 673.342516ms: waiting for machine to come up
	I0722 10:29:24.322298   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:24.322654   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:24.322673   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:24.322629   14039 retry.go:31] will retry after 625.787153ms: waiting for machine to come up
	I0722 10:29:24.949957   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:24.950287   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:24.950312   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:24.950238   14039 retry.go:31] will retry after 827.528686ms: waiting for machine to come up
	I0722 10:29:25.778949   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:25.779309   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:25.779329   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:25.779274   14039 retry.go:31] will retry after 1.408983061s: waiting for machine to come up
	I0722 10:29:27.189800   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:27.190195   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:27.190223   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:27.190147   14039 retry.go:31] will retry after 1.767432679s: waiting for machine to come up
	I0722 10:29:28.960519   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:28.960927   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:28.960956   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:28.960876   14039 retry.go:31] will retry after 2.263225443s: waiting for machine to come up
	I0722 10:29:31.225552   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:31.225965   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:31.225990   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:31.225929   14039 retry.go:31] will retry after 2.324899366s: waiting for machine to come up
	I0722 10:29:33.553341   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:33.553655   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:33.553679   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:33.553622   14039 retry.go:31] will retry after 3.136063412s: waiting for machine to come up
	I0722 10:29:36.692416   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:36.692887   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find current IP address of domain addons-362127 in network mk-addons-362127
	I0722 10:29:36.692914   14017 main.go:141] libmachine: (addons-362127) DBG | I0722 10:29:36.692823   14039 retry.go:31] will retry after 4.388122313s: waiting for machine to come up
	I0722 10:29:41.082901   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.083364   14017 main.go:141] libmachine: (addons-362127) Found IP for machine: 192.168.39.147
	I0722 10:29:41.083385   14017 main.go:141] libmachine: (addons-362127) Reserving static IP address...
	I0722 10:29:41.083398   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has current primary IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.083760   14017 main.go:141] libmachine: (addons-362127) DBG | unable to find host DHCP lease matching {name: "addons-362127", mac: "52:54:00:5d:13:55", ip: "192.168.39.147"} in network mk-addons-362127
	I0722 10:29:41.150169   14017 main.go:141] libmachine: (addons-362127) DBG | Getting to WaitForSSH function...
	I0722 10:29:41.150199   14017 main.go:141] libmachine: (addons-362127) Reserved static IP address: 192.168.39.147
	I0722 10:29:41.150213   14017 main.go:141] libmachine: (addons-362127) Waiting for SSH to be available...
	I0722 10:29:41.152466   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.152865   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:41.152893   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.153026   14017 main.go:141] libmachine: (addons-362127) DBG | Using SSH client type: external
	I0722 10:29:41.153051   14017 main.go:141] libmachine: (addons-362127) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa (-rw-------)
	I0722 10:29:41.153083   14017 main.go:141] libmachine: (addons-362127) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 10:29:41.153094   14017 main.go:141] libmachine: (addons-362127) DBG | About to run SSH command:
	I0722 10:29:41.153138   14017 main.go:141] libmachine: (addons-362127) DBG | exit 0
	I0722 10:29:41.279873   14017 main.go:141] libmachine: (addons-362127) DBG | SSH cmd err, output: <nil>: 
	I0722 10:29:41.280097   14017 main.go:141] libmachine: (addons-362127) KVM machine creation complete!
	I0722 10:29:41.280367   14017 main.go:141] libmachine: (addons-362127) Calling .GetConfigRaw
	I0722 10:29:41.280893   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:29:41.281078   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:29:41.281214   14017 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0722 10:29:41.281230   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:29:41.282290   14017 main.go:141] libmachine: Detecting operating system of created instance...
	I0722 10:29:41.282300   14017 main.go:141] libmachine: Waiting for SSH to be available...
	I0722 10:29:41.282306   14017 main.go:141] libmachine: Getting to WaitForSSH function...
	I0722 10:29:41.282311   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:41.284712   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.285071   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:41.285094   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.285229   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:41.285384   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.285516   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.285642   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:41.285834   14017 main.go:141] libmachine: Using SSH client type: native
	I0722 10:29:41.286113   14017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0722 10:29:41.286127   14017 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0722 10:29:41.379473   14017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 10:29:41.379498   14017 main.go:141] libmachine: Detecting the provisioner...
	I0722 10:29:41.379509   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:41.382453   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.382816   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:41.382841   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.383021   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:41.383222   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.383386   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.383540   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:41.383697   14017 main.go:141] libmachine: Using SSH client type: native
	I0722 10:29:41.383869   14017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0722 10:29:41.383880   14017 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0722 10:29:41.480602   14017 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0722 10:29:41.480672   14017 main.go:141] libmachine: found compatible host: buildroot
	I0722 10:29:41.480685   14017 main.go:141] libmachine: Provisioning with buildroot...
	I0722 10:29:41.480699   14017 main.go:141] libmachine: (addons-362127) Calling .GetMachineName
	I0722 10:29:41.480945   14017 buildroot.go:166] provisioning hostname "addons-362127"
	I0722 10:29:41.480973   14017 main.go:141] libmachine: (addons-362127) Calling .GetMachineName
	I0722 10:29:41.481174   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:41.483646   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.483928   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:41.483949   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.484095   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:41.484281   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.484455   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.484595   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:41.484765   14017 main.go:141] libmachine: Using SSH client type: native
	I0722 10:29:41.484923   14017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0722 10:29:41.484936   14017 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-362127 && echo "addons-362127" | sudo tee /etc/hostname
	I0722 10:29:41.594611   14017 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-362127
	
	I0722 10:29:41.594633   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:41.596974   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.597258   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:41.597307   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.597427   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:41.597622   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.597756   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.597961   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:41.598082   14017 main.go:141] libmachine: Using SSH client type: native
	I0722 10:29:41.598254   14017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0722 10:29:41.598276   14017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-362127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-362127/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-362127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 10:29:41.700715   14017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 10:29:41.700741   14017 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 10:29:41.700791   14017 buildroot.go:174] setting up certificates
	I0722 10:29:41.700804   14017 provision.go:84] configureAuth start
	I0722 10:29:41.700822   14017 main.go:141] libmachine: (addons-362127) Calling .GetMachineName
	I0722 10:29:41.701089   14017 main.go:141] libmachine: (addons-362127) Calling .GetIP
	I0722 10:29:41.703475   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.703794   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:41.703821   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.703967   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:41.706006   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.706317   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:41.706337   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.706506   14017 provision.go:143] copyHostCerts
	I0722 10:29:41.706581   14017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 10:29:41.706698   14017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 10:29:41.706778   14017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 10:29:41.706847   14017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.addons-362127 san=[127.0.0.1 192.168.39.147 addons-362127 localhost minikube]
	I0722 10:29:41.894425   14017 provision.go:177] copyRemoteCerts
	I0722 10:29:41.894477   14017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 10:29:41.894500   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:41.897006   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.897330   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:41.897354   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:41.897492   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:41.897655   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:41.897786   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:41.897909   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:29:41.973624   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 10:29:41.996692   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 10:29:42.018990   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 10:29:42.041253   14017 provision.go:87] duration metric: took 340.435418ms to configureAuth
	I0722 10:29:42.041273   14017 buildroot.go:189] setting minikube options for container-runtime
	I0722 10:29:42.041436   14017 config.go:182] Loaded profile config "addons-362127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:29:42.041512   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:42.043838   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.044105   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:42.044136   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.044276   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:42.044459   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:42.044602   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:42.044744   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:42.044906   14017 main.go:141] libmachine: Using SSH client type: native
	I0722 10:29:42.045048   14017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0722 10:29:42.045060   14017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 10:29:42.291411   14017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 10:29:42.291433   14017 main.go:141] libmachine: Checking connection to Docker...
	I0722 10:29:42.291441   14017 main.go:141] libmachine: (addons-362127) Calling .GetURL
	I0722 10:29:42.292571   14017 main.go:141] libmachine: (addons-362127) DBG | Using libvirt version 6000000
	I0722 10:29:42.294571   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.294826   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:42.294850   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.295023   14017 main.go:141] libmachine: Docker is up and running!
	I0722 10:29:42.295047   14017 main.go:141] libmachine: Reticulating splines...
	I0722 10:29:42.295053   14017 client.go:171] duration metric: took 22.625304136s to LocalClient.Create
	I0722 10:29:42.295073   14017 start.go:167] duration metric: took 22.625352131s to libmachine.API.Create "addons-362127"
	I0722 10:29:42.295086   14017 start.go:293] postStartSetup for "addons-362127" (driver="kvm2")
	I0722 10:29:42.295099   14017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 10:29:42.295115   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:29:42.295351   14017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 10:29:42.295387   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:42.297207   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.297511   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:42.297540   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.297634   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:42.297806   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:42.297966   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:42.298099   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:29:42.374402   14017 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 10:29:42.378378   14017 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 10:29:42.378404   14017 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 10:29:42.378462   14017 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 10:29:42.378487   14017 start.go:296] duration metric: took 83.3928ms for postStartSetup
	I0722 10:29:42.378511   14017 main.go:141] libmachine: (addons-362127) Calling .GetConfigRaw
	I0722 10:29:42.378959   14017 main.go:141] libmachine: (addons-362127) Calling .GetIP
	I0722 10:29:42.381379   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.381820   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:42.381845   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.382096   14017 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/config.json ...
	I0722 10:29:42.382309   14017 start.go:128] duration metric: took 22.729356958s to createHost
	I0722 10:29:42.382333   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:42.384569   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.384862   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:42.384889   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.385044   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:42.385202   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:42.385363   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:42.385500   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:42.385626   14017 main.go:141] libmachine: Using SSH client type: native
	I0722 10:29:42.385776   14017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0722 10:29:42.385786   14017 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 10:29:42.480657   14017 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721644182.455604273
	
	I0722 10:29:42.480680   14017 fix.go:216] guest clock: 1721644182.455604273
	I0722 10:29:42.480690   14017 fix.go:229] Guest: 2024-07-22 10:29:42.455604273 +0000 UTC Remote: 2024-07-22 10:29:42.382323527 +0000 UTC m=+22.826470222 (delta=73.280746ms)
	I0722 10:29:42.480731   14017 fix.go:200] guest clock delta is within tolerance: 73.280746ms
	I0722 10:29:42.480736   14017 start.go:83] releasing machines lock for "addons-362127", held for 22.827889547s
	I0722 10:29:42.480757   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:29:42.481015   14017 main.go:141] libmachine: (addons-362127) Calling .GetIP
	I0722 10:29:42.483354   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.483723   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:42.483748   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.483904   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:29:42.484400   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:29:42.484561   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:29:42.484665   14017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 10:29:42.484716   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:42.484749   14017 ssh_runner.go:195] Run: cat /version.json
	I0722 10:29:42.484771   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:29:42.487283   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.487438   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.487557   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:42.487581   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.487738   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:42.487879   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:42.487896   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:42.487908   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:42.488036   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:29:42.488089   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:42.488171   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:29:42.488214   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:29:42.488285   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:29:42.488420   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:29:42.585533   14017 ssh_runner.go:195] Run: systemctl --version
	I0722 10:29:42.591184   14017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 10:29:42.745296   14017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 10:29:42.750982   14017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 10:29:42.751031   14017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 10:29:42.767021   14017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 10:29:42.767041   14017 start.go:495] detecting cgroup driver to use...
	I0722 10:29:42.767097   14017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 10:29:42.783278   14017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 10:29:42.797108   14017 docker.go:217] disabling cri-docker service (if available) ...
	I0722 10:29:42.797156   14017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 10:29:42.810143   14017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 10:29:42.823176   14017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 10:29:42.936166   14017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 10:29:43.069095   14017 docker.go:233] disabling docker service ...
	I0722 10:29:43.069160   14017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 10:29:43.083237   14017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 10:29:43.095562   14017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 10:29:43.228490   14017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 10:29:43.343384   14017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 10:29:43.357392   14017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 10:29:43.374871   14017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 10:29:43.374932   14017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:29:43.385318   14017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 10:29:43.385375   14017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:29:43.395737   14017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:29:43.405878   14017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:29:43.415804   14017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 10:29:43.425968   14017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:29:43.435811   14017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:29:43.452530   14017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:29:43.462700   14017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 10:29:43.472039   14017 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 10:29:43.472084   14017 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 10:29:43.484344   14017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 10:29:43.493495   14017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:29:43.608656   14017 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 10:29:43.745676   14017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 10:29:43.745759   14017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 10:29:43.750557   14017 start.go:563] Will wait 60s for crictl version
	I0722 10:29:43.750610   14017 ssh_runner.go:195] Run: which crictl
	I0722 10:29:43.754165   14017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 10:29:43.789788   14017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 10:29:43.789900   14017 ssh_runner.go:195] Run: crio --version
	I0722 10:29:43.816975   14017 ssh_runner.go:195] Run: crio --version
	I0722 10:29:43.848783   14017 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 10:29:43.849976   14017 main.go:141] libmachine: (addons-362127) Calling .GetIP
	I0722 10:29:43.852269   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:43.852632   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:29:43.852660   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:29:43.852835   14017 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 10:29:43.856776   14017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 10:29:43.868436   14017 kubeadm.go:883] updating cluster {Name:addons-362127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-362127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 10:29:43.868534   14017 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:29:43.868576   14017 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 10:29:43.901435   14017 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 10:29:43.901482   14017 ssh_runner.go:195] Run: which lz4
	I0722 10:29:43.905129   14017 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 10:29:43.908916   14017 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 10:29:43.908936   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 10:29:45.190696   14017 crio.go:462] duration metric: took 1.285590031s to copy over tarball
	I0722 10:29:45.190794   14017 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 10:29:47.408165   14017 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.217340659s)
	I0722 10:29:47.408191   14017 crio.go:469] duration metric: took 2.217463481s to extract the tarball
	I0722 10:29:47.408199   14017 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 10:29:47.452401   14017 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 10:29:47.493848   14017 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 10:29:47.493889   14017 cache_images.go:84] Images are preloaded, skipping loading
	I0722 10:29:47.493899   14017 kubeadm.go:934] updating node { 192.168.39.147 8443 v1.30.3 crio true true} ...
	I0722 10:29:47.494023   14017 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-362127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-362127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 10:29:47.494108   14017 ssh_runner.go:195] Run: crio config
	I0722 10:29:47.538075   14017 cni.go:84] Creating CNI manager for ""
	I0722 10:29:47.538097   14017 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 10:29:47.538115   14017 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 10:29:47.538152   14017 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-362127 NodeName:addons-362127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 10:29:47.538319   14017 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-362127"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 10:29:47.538390   14017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 10:29:47.548608   14017 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 10:29:47.548661   14017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 10:29:47.558020   14017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0722 10:29:47.573925   14017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 10:29:47.589354   14017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0722 10:29:47.604625   14017 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I0722 10:29:47.608574   14017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 10:29:47.620223   14017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:29:47.723199   14017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 10:29:47.740047   14017 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127 for IP: 192.168.39.147
	I0722 10:29:47.740072   14017 certs.go:194] generating shared ca certs ...
	I0722 10:29:47.740096   14017 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:47.740246   14017 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 10:29:47.874497   14017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt ...
	I0722 10:29:47.874531   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt: {Name:mke882a38fe6f483e6530028b8df28144d29a855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:47.874703   14017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key ...
	I0722 10:29:47.874717   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key: {Name:mkf540d4917bbffc298d8aa1a4169d65a42a8673 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:47.874812   14017 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 10:29:47.973344   14017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt ...
	I0722 10:29:47.973374   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt: {Name:mke2b7b72f11e82846972309d55ed3d0e72012b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:47.973545   14017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key ...
	I0722 10:29:47.973560   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key: {Name:mk744638ea69c3f6193a23844c6a68538dfb44a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:47.973663   14017 certs.go:256] generating profile certs ...
	I0722 10:29:47.973731   14017 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.key
	I0722 10:29:47.973749   14017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt with IP's: []
	I0722 10:29:48.236064   14017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt ...
	I0722 10:29:48.236093   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: {Name:mkfc010ff291afc7aee26ac16e832d5f514edb00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:48.236261   14017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.key ...
	I0722 10:29:48.236275   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.key: {Name:mkf0c231b4b54ef7c9316e71266a716bdfb49393 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:48.236367   14017 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.key.43abceed
	I0722 10:29:48.236405   14017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.crt.43abceed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.147]
	I0722 10:29:48.468176   14017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.crt.43abceed ...
	I0722 10:29:48.468208   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.crt.43abceed: {Name:mk921cfb0bc1062e3295be5c5ec1a1e46daf48a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:48.468373   14017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.key.43abceed ...
	I0722 10:29:48.468409   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.key.43abceed: {Name:mkf336033ff12e37cb73c650a52c869b86c144ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:48.468506   14017 certs.go:381] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.crt.43abceed -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.crt
	I0722 10:29:48.468596   14017 certs.go:385] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.key.43abceed -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.key
	I0722 10:29:48.468662   14017 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/proxy-client.key
	I0722 10:29:48.468684   14017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/proxy-client.crt with IP's: []
	I0722 10:29:48.766613   14017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/proxy-client.crt ...
	I0722 10:29:48.766643   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/proxy-client.crt: {Name:mk22b627da338fc6b9d9dd57a7688665d43c25aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:48.766810   14017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/proxy-client.key ...
	I0722 10:29:48.766824   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/proxy-client.key: {Name:mk8bfd98994daf8915ba3441b0b1840e2d93aebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:29:48.767011   14017 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 10:29:48.767055   14017 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 10:29:48.767089   14017 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 10:29:48.767124   14017 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 10:29:48.767671   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 10:29:48.793320   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 10:29:48.819848   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 10:29:48.848414   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 10:29:48.872347   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0722 10:29:48.895103   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 10:29:48.920678   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 10:29:48.945034   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 10:29:48.968049   14017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 10:29:48.991161   14017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 10:29:49.007024   14017 ssh_runner.go:195] Run: openssl version
	I0722 10:29:49.012452   14017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 10:29:49.022593   14017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:29:49.026899   14017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:29:49.026947   14017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:29:49.032557   14017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 10:29:49.042982   14017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 10:29:49.046988   14017 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 10:29:49.047030   14017 kubeadm.go:392] StartCluster: {Name:addons-362127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-362127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:29:49.047101   14017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 10:29:49.047162   14017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 10:29:49.088580   14017 cri.go:89] found id: ""
	I0722 10:29:49.088650   14017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 10:29:49.101780   14017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 10:29:49.110882   14017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 10:29:49.119802   14017 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 10:29:49.119821   14017 kubeadm.go:157] found existing configuration files:
	
	I0722 10:29:49.119853   14017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 10:29:49.128787   14017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 10:29:49.128835   14017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 10:29:49.137716   14017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 10:29:49.146203   14017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 10:29:49.146242   14017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 10:29:49.155104   14017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 10:29:49.163468   14017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 10:29:49.163508   14017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 10:29:49.172239   14017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 10:29:49.180970   14017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 10:29:49.181022   14017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 10:29:49.189794   14017 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 10:29:49.386269   14017 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 10:29:59.753185   14017 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 10:29:59.753269   14017 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 10:29:59.753377   14017 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 10:29:59.753516   14017 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 10:29:59.753640   14017 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 10:29:59.753718   14017 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 10:29:59.755928   14017 out.go:204]   - Generating certificates and keys ...
	I0722 10:29:59.756024   14017 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 10:29:59.756115   14017 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 10:29:59.756202   14017 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0722 10:29:59.756281   14017 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0722 10:29:59.756366   14017 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0722 10:29:59.756445   14017 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0722 10:29:59.756522   14017 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0722 10:29:59.756676   14017 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-362127 localhost] and IPs [192.168.39.147 127.0.0.1 ::1]
	I0722 10:29:59.756738   14017 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0722 10:29:59.756849   14017 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-362127 localhost] and IPs [192.168.39.147 127.0.0.1 ::1]
	I0722 10:29:59.756906   14017 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0722 10:29:59.756959   14017 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0722 10:29:59.757014   14017 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0722 10:29:59.757086   14017 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 10:29:59.757140   14017 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 10:29:59.757191   14017 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 10:29:59.757235   14017 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 10:29:59.757325   14017 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 10:29:59.757375   14017 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 10:29:59.757441   14017 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 10:29:59.757506   14017 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 10:29:59.758899   14017 out.go:204]   - Booting up control plane ...
	I0722 10:29:59.758969   14017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 10:29:59.759063   14017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 10:29:59.759131   14017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 10:29:59.759258   14017 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 10:29:59.759341   14017 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 10:29:59.759381   14017 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 10:29:59.759491   14017 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 10:29:59.759554   14017 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 10:29:59.759631   14017 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.613572ms
	I0722 10:29:59.759738   14017 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 10:29:59.759822   14017 kubeadm.go:310] [api-check] The API server is healthy after 5.501358528s
	I0722 10:29:59.759954   14017 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 10:29:59.760096   14017 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 10:29:59.760164   14017 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 10:29:59.760326   14017 kubeadm.go:310] [mark-control-plane] Marking the node addons-362127 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 10:29:59.760406   14017 kubeadm.go:310] [bootstrap-token] Using token: e88oa7.cou2ewfo3a53ksgg
	I0722 10:29:59.762449   14017 out.go:204]   - Configuring RBAC rules ...
	I0722 10:29:59.762541   14017 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 10:29:59.762609   14017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 10:29:59.762714   14017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 10:29:59.762815   14017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 10:29:59.762915   14017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 10:29:59.762997   14017 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 10:29:59.763103   14017 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 10:29:59.763147   14017 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 10:29:59.763186   14017 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 10:29:59.763191   14017 kubeadm.go:310] 
	I0722 10:29:59.763257   14017 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 10:29:59.763273   14017 kubeadm.go:310] 
	I0722 10:29:59.763338   14017 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 10:29:59.763344   14017 kubeadm.go:310] 
	I0722 10:29:59.763382   14017 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 10:29:59.763436   14017 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 10:29:59.763478   14017 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 10:29:59.763484   14017 kubeadm.go:310] 
	I0722 10:29:59.763527   14017 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 10:29:59.763533   14017 kubeadm.go:310] 
	I0722 10:29:59.763571   14017 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 10:29:59.763576   14017 kubeadm.go:310] 
	I0722 10:29:59.763618   14017 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 10:29:59.763679   14017 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 10:29:59.763735   14017 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 10:29:59.763741   14017 kubeadm.go:310] 
	I0722 10:29:59.763814   14017 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 10:29:59.763891   14017 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 10:29:59.763896   14017 kubeadm.go:310] 
	I0722 10:29:59.763967   14017 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e88oa7.cou2ewfo3a53ksgg \
	I0722 10:29:59.764054   14017 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 10:29:59.764073   14017 kubeadm.go:310] 	--control-plane 
	I0722 10:29:59.764078   14017 kubeadm.go:310] 
	I0722 10:29:59.764147   14017 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 10:29:59.764153   14017 kubeadm.go:310] 
	I0722 10:29:59.764224   14017 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e88oa7.cou2ewfo3a53ksgg \
	I0722 10:29:59.764313   14017 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 10:29:59.764327   14017 cni.go:84] Creating CNI manager for ""
	I0722 10:29:59.764335   14017 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 10:29:59.765783   14017 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 10:29:59.766932   14017 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 10:29:59.777690   14017 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
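	For reference, the 496-byte file written above is the bridge CNI configuration in standard CNI conflist (JSON) form; its exact contents are not reproduced in this log. A minimal way to inspect it on the node, assuming the profile name shown in this run:

	    # Hypothetical inspection of the generated bridge CNI config (profile name taken from this log)
	    minikube -p addons-362127 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
	    # Expect a JSON "plugins" list using the standard "bridge" plugin with "host-local" IPAM;
	    # exact fields (subnet, hairpin mode, portmap) depend on the minikube version.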
	I0722 10:29:59.795331   14017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 10:29:59.795411   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:29:59.795418   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-362127 minikube.k8s.io/updated_at=2024_07_22T10_29_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=addons-362127 minikube.k8s.io/primary=true
	I0722 10:29:59.923266   14017 ops.go:34] apiserver oom_adj: -16
	I0722 10:29:59.923425   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:00.424211   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:00.924146   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:01.424410   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:01.924476   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:02.424256   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:02.924350   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:03.424400   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:03.924254   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:04.424118   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:04.924332   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:05.423733   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:05.924069   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:06.423929   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:06.923488   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:07.423545   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:07.924186   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:08.424363   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:08.924306   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:09.423491   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:09.924332   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:10.423814   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:10.923525   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:11.423464   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:11.923450   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:12.423532   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:12.923460   14017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:30:13.002799   14017 kubeadm.go:1113] duration metric: took 13.207457539s to wait for elevateKubeSystemPrivileges
	I0722 10:30:13.002839   14017 kubeadm.go:394] duration metric: took 23.955811499s to StartCluster
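	The repeated "kubectl get sa default" runs above are a readiness poll: they wait for the "default" ServiceAccount to exist in the default namespace (i.e. for the service-account controller to have come up), and the 13.2s "elevateKubeSystemPrivileges" metric is the duration of that wait. A hedged manual equivalent of the same check, assuming the kubeconfig context from this run:

	    # Hypothetical manual check mirroring the poll above; succeeds once the default ServiceAccount exists
	    kubectl --context addons-362127 -n default get serviceaccount default -o name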
	I0722 10:30:13.002857   14017 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:30:13.002982   14017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:30:13.003351   14017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:30:13.003535   14017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0722 10:30:13.003557   14017 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:30:13.003637   14017 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
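	The toEnable map above is the effective addon set for this profile; the same toggles can also be driven from the CLI after start. A hedged example, reusing the profile name from this log:

	    # Hypothetical CLI equivalents of the addon toggles listed above
	    minikube addons list -p addons-362127                   # show current enable/disable state
	    minikube addons enable metrics-server -p addons-362127  # flip a single addon for the profile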
	I0722 10:30:13.003767   14017 addons.go:69] Setting yakd=true in profile "addons-362127"
	I0722 10:30:13.003816   14017 addons.go:234] Setting addon yakd=true in "addons-362127"
	I0722 10:30:13.003856   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.003866   14017 addons.go:69] Setting ingress-dns=true in profile "addons-362127"
	I0722 10:30:13.003908   14017 addons.go:234] Setting addon ingress-dns=true in "addons-362127"
	I0722 10:30:13.003916   14017 addons.go:69] Setting cloud-spanner=true in profile "addons-362127"
	I0722 10:30:13.003934   14017 addons.go:234] Setting addon cloud-spanner=true in "addons-362127"
	I0722 10:30:13.003938   14017 addons.go:69] Setting registry=true in profile "addons-362127"
	I0722 10:30:13.003949   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.003961   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.003963   14017 addons.go:69] Setting gcp-auth=true in profile "addons-362127"
	I0722 10:30:13.003973   14017 addons.go:234] Setting addon registry=true in "addons-362127"
	I0722 10:30:13.003985   14017 mustload.go:65] Loading cluster: addons-362127
	I0722 10:30:13.004000   14017 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-362127"
	I0722 10:30:13.004015   14017 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-362127"
	I0722 10:30:13.004026   14017 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-362127"
	I0722 10:30:13.004040   14017 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-362127"
	I0722 10:30:13.004065   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.004170   14017 config.go:182] Loaded profile config "addons-362127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:30:13.003938   14017 addons.go:69] Setting helm-tiller=true in profile "addons-362127"
	I0722 10:30:13.004332   14017 addons.go:69] Setting volcano=true in profile "addons-362127"
	I0722 10:30:13.004346   14017 addons.go:234] Setting addon helm-tiller=true in "addons-362127"
	I0722 10:30:13.004355   14017 addons.go:234] Setting addon volcano=true in "addons-362127"
	I0722 10:30:13.004359   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.004369   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.004395   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.004405   14017 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-362127"
	I0722 10:30:13.004408   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.004427   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.004443   14017 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-362127"
	I0722 10:30:13.004467   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.004483   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.004503   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.004551   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.004577   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.003949   14017 addons.go:69] Setting metrics-server=true in profile "addons-362127"
	I0722 10:30:13.004627   14017 addons.go:69] Setting storage-provisioner=true in profile "addons-362127"
	I0722 10:30:13.004654   14017 addons.go:234] Setting addon storage-provisioner=true in "addons-362127"
	I0722 10:30:13.004655   14017 addons.go:234] Setting addon metrics-server=true in "addons-362127"
	I0722 10:30:13.004683   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.004701   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.004706   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.004722   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.004768   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.004799   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.004806   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.004822   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.005064   14017 addons.go:69] Setting volumesnapshots=true in profile "addons-362127"
	I0722 10:30:13.005086   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.005092   14017 addons.go:234] Setting addon volumesnapshots=true in "addons-362127"
	I0722 10:30:13.005106   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.005114   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.005125   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.005133   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.004399   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.005199   14017 addons.go:69] Setting inspektor-gadget=true in profile "addons-362127"
	I0722 10:30:13.005203   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.004347   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.005221   14017 addons.go:234] Setting addon inspektor-gadget=true in "addons-362127"
	I0722 10:30:13.003902   14017 addons.go:69] Setting default-storageclass=true in profile "addons-362127"
	I0722 10:30:13.005251   14017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-362127"
	I0722 10:30:13.005092   14017 addons.go:69] Setting ingress=true in profile "addons-362127"
	I0722 10:30:13.005256   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.005268   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.005272   14017 addons.go:234] Setting addon ingress=true in "addons-362127"
	I0722 10:30:13.003713   14017 config.go:182] Loaded profile config "addons-362127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:30:13.004006   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.006509   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.006821   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.006862   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.006882   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.006901   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.007274   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.009237   14017 out.go:177] * Verifying Kubernetes components...
	I0722 10:30:13.017289   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.017378   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.017432   14017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:30:13.025313   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45045
	I0722 10:30:13.026184   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.026741   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.026766   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.027097   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.027651   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.027683   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.030060   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36207
	I0722 10:30:13.030251   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43055
	I0722 10:30:13.030649   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.030729   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.031179   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.031194   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.031244   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.031267   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.031517   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.031570   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.032145   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.032167   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.032188   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.032197   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.033708   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34987
	I0722 10:30:13.037649   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.037689   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.038330   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.038365   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.044585   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46675
	I0722 10:30:13.044699   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35755
	I0722 10:30:13.044858   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45359
	I0722 10:30:13.044949   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34377
	I0722 10:30:13.045514   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.046005   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.046027   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.046547   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.046692   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.046758   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.046841   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.048938   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.048957   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.049088   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.049098   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.049224   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.049234   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.049288   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.049338   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46795
	I0722 10:30:13.049765   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.050321   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.050362   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.050666   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.050690   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.050757   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.050788   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39585
	I0722 10:30:13.050807   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.050875   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.051116   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.051295   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.051443   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.051499   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.051537   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.052706   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.053111   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.053132   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.053545   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.054120   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.054155   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.054442   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.054790   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.054819   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.055760   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.055777   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.056277   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.056850   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.056883   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.057584   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.057621   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.057672   14017 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-362127"
	I0722 10:30:13.057726   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.058050   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.058076   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.059242   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33345
	I0722 10:30:13.061004   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.061574   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.061590   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.061994   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.062628   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.062662   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.066808   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45445
	I0722 10:30:13.067349   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.067903   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.067921   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.068299   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.068517   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.070594   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.072679   14017 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0722 10:30:13.074133   14017 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0722 10:30:13.074151   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0722 10:30:13.074172   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.077848   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.078433   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.078455   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.078646   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.078859   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.079072   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.079254   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.087075   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34117
	I0722 10:30:13.087637   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.088145   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.088163   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.088698   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.089337   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.089377   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.099040   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35453
	I0722 10:30:13.099819   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.100530   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.100550   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.101405   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.101698   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.103601   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.104345   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I0722 10:30:13.104505   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37631
	I0722 10:30:13.105061   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.105496   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.105512   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.105921   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.106145   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39733
	I0722 10:30:13.106184   14017 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0722 10:30:13.106646   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.106680   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.106873   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
	I0722 10:30:13.107317   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.107705   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.107778   14017 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0722 10:30:13.107789   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I0722 10:30:13.107792   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0722 10:30:13.107810   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.108307   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.108323   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.108484   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.108610   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.108624   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.108949   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.109236   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.109277   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.109698   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.109713   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.110103   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.110141   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.110143   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0722 10:30:13.110246   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39205
	I0722 10:30:13.110823   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.110859   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.111223   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.111266   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.111330   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.111363   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.111374   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.111397   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.111566   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43389
	I0722 10:30:13.111658   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.111674   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.111701   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.111803   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.111815   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.111825   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.111972   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.112170   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.112226   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.112271   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.112463   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.113037   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.113054   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.113108   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38811
	I0722 10:30:13.113243   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.113322   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.113599   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.113658   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42777
	I0722 10:30:13.113821   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.113837   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.113880   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37835
	I0722 10:30:13.113889   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42587
	I0722 10:30:13.114230   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.114352   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.114390   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.114458   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.114642   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.114774   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.114788   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.114838   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.115728   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.115779   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.115858   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.115870   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.116465   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.116502   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.116692   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.116856   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.117346   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.117542   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.117685   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.117696   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.118346   14017 addons.go:234] Setting addon default-storageclass=true in "addons-362127"
	I0722 10:30:13.118379   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.118381   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:13.118750   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.118780   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.119520   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.119808   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.120102   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.120522   14017 out.go:177]   - Using image docker.io/registry:2.8.3
	I0722 10:30:13.120523   14017 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0722 10:30:13.120569   14017 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0722 10:30:13.120783   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.121809   14017 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0722 10:30:13.122481   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.122498   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.122575   14017 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0722 10:30:13.122597   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0722 10:30:13.122620   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.122952   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.123194   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.123584   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.123831   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:13.123845   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:13.124537   14017 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0722 10:30:13.124605   14017 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0722 10:30:13.124552   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0722 10:30:13.124776   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.125979   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:13.126018   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:13.126034   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:13.126047   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:13.126054   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:13.126195   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.126475   14017 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0722 10:30:13.126486   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0722 10:30:13.126500   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.126575   14017 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0722 10:30:13.127846   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0722 10:30:13.128487   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.128995   14017 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0722 10:30:13.129440   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
	I0722 10:30:13.129960   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.130041   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.130060   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.130061   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0722 10:30:13.130199   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.130375   14017 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0722 10:30:13.130390   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0722 10:30:13.130400   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.130404   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.130411   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.130456   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.130612   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.130771   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.130823   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.130977   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.131272   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.131323   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.131763   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.131782   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.131817   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.131832   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.132172   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.132294   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.132599   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.132615   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.132929   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0722 10:30:13.133057   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.133109   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.133341   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.133589   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.133637   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.134125   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:13.134168   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:13.134176   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	W0722 10:30:13.134235   14017 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
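	The volcano warning above is non-fatal and the run continues; per the message, the addon simply does not support the crio runtime. If the warning is unwanted on crio-based profiles, the addon can be left off explicitly; a hedged example with this run's profile name:

	    # Hypothetical: explicitly disable the unsupported addon for this profile
	    minikube addons disable volcano -p addons-362127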
	I0722 10:30:13.134395   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34893
	I0722 10:30:13.134986   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.135509   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.135526   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.136121   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.136457   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.136591   14017 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0722 10:30:13.136819   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.137211   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.137230   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.137355   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35035
	I0722 10:30:13.137404   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0722 10:30:13.137478   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.137953   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39761
	I0722 10:30:13.137999   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.138033   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.138186   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.138336   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.138660   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.138672   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.138897   14017 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 10:30:13.138913   14017 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 10:30:13.138928   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.138952   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.139124   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.139357   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.139371   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.139434   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.139525   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.139978   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.140347   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0722 10:30:13.140487   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.141446   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0722 10:30:13.142043   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.142493   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0722 10:30:13.142550   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.142503   14017 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0722 10:30:13.142573   14017 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0722 10:30:13.142598   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.142980   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.143001   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.143288   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.143708   14017 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 10:30:13.144360   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.144669   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.144810   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.145103   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.145288   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0722 10:30:13.145371   14017 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 10:30:13.145383   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 10:30:13.145397   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.146563   14017 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0722 10:30:13.146848   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.147341   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.147375   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.147651   14017 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0722 10:30:13.147710   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.147717   14017 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0722 10:30:13.147729   14017 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0722 10:30:13.147746   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.147890   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.148224   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.148432   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.148790   14017 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0722 10:30:13.148802   14017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0722 10:30:13.148817   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.149141   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.149773   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.149798   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.149972   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.150153   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.150312   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.150523   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.152536   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.152897   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.153099   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.153125   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.153334   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.153557   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.153581   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.153609   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.153773   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.153830   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.153954   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.153994   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.154317   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.154449   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.156087   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40987
	I0722 10:30:13.156415   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.156986   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.157002   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.157395   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.157518   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33365
	I0722 10:30:13.157612   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37753
	I0722 10:30:13.157739   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.157917   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.157992   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.158310   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.158328   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.158444   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.158458   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.158718   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.158780   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.158983   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.159395   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:13.159429   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:13.159528   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.160412   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.161528   14017 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	W0722 10:30:13.162057   14017 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48638->192.168.39.147:22: read: connection reset by peer
	I0722 10:30:13.162089   14017 retry.go:31] will retry after 222.201543ms: ssh: handshake failed: read tcp 192.168.39.1:48638->192.168.39.147:22: read: connection reset by peer
	I0722 10:30:13.162694   14017 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0722 10:30:13.163523   14017 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0722 10:30:13.163539   14017 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0722 10:30:13.163555   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.165419   14017 out.go:177]   - Using image docker.io/busybox:stable
	I0722 10:30:13.166545   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.166587   14017 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0722 10:30:13.166608   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0722 10:30:13.166626   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.166984   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.167007   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.167175   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.167321   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.167470   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.167598   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.169565   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.169992   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.170015   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.170186   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.170333   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.170479   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.170595   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:13.195866   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45681
	I0722 10:30:13.196331   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:13.197236   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:13.197257   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:13.197541   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:13.197692   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:13.199127   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:13.199318   14017 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 10:30:13.199331   14017 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 10:30:13.199344   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:13.202167   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.202558   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:13.202587   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:13.202738   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:13.202916   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:13.203046   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:13.203186   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	W0722 10:30:13.203833   14017 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48668->192.168.39.147:22: read: connection reset by peer
	I0722 10:30:13.203860   14017 retry.go:31] will retry after 358.673458ms: ssh: handshake failed: read tcp 192.168.39.1:48668->192.168.39.147:22: read: connection reset by peer
	I0722 10:30:13.426152   14017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 10:30:13.426172   14017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0722 10:30:13.491038   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0722 10:30:13.506268   14017 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0722 10:30:13.506292   14017 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0722 10:30:13.626071   14017 node_ready.go:35] waiting up to 6m0s for node "addons-362127" to be "Ready" ...
	I0722 10:30:13.629628   14017 node_ready.go:49] node "addons-362127" has status "Ready":"True"
	I0722 10:30:13.629650   14017 node_ready.go:38] duration metric: took 3.541335ms for node "addons-362127" to be "Ready" ...
	I0722 10:30:13.629659   14017 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 10:30:13.637896   14017 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kdg7f" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:13.649818   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0722 10:30:13.705008   14017 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0722 10:30:13.705037   14017 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0722 10:30:13.713558   14017 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0722 10:30:13.713580   14017 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0722 10:30:13.727642   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0722 10:30:13.747900   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0722 10:30:13.762692   14017 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 10:30:13.762712   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0722 10:30:13.763702   14017 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0722 10:30:13.763718   14017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0722 10:30:13.782062   14017 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0722 10:30:13.782092   14017 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0722 10:30:13.796051   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 10:30:13.803750   14017 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0722 10:30:13.803768   14017 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0722 10:30:13.815953   14017 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0722 10:30:13.815974   14017 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0722 10:30:13.825042   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0722 10:30:13.886337   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0722 10:30:13.996265   14017 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0722 10:30:13.996285   14017 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0722 10:30:13.999231   14017 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0722 10:30:13.999241   14017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0722 10:30:14.047801   14017 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0722 10:30:14.047831   14017 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0722 10:30:14.078501   14017 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0722 10:30:14.078520   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0722 10:30:14.086619   14017 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0722 10:30:14.086638   14017 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0722 10:30:14.087686   14017 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 10:30:14.087708   14017 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 10:30:14.156761   14017 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0722 10:30:14.156783   14017 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0722 10:30:14.203561   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 10:30:14.235008   14017 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0722 10:30:14.235034   14017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0722 10:30:14.240487   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0722 10:30:14.279295   14017 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0722 10:30:14.279320   14017 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0722 10:30:14.290508   14017 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0722 10:30:14.290525   14017 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0722 10:30:14.331552   14017 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0722 10:30:14.331575   14017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0722 10:30:14.382953   14017 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 10:30:14.382984   14017 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 10:30:14.416283   14017 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0722 10:30:14.416310   14017 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0722 10:30:14.464745   14017 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0722 10:30:14.464765   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0722 10:30:14.536244   14017 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0722 10:30:14.536275   14017 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0722 10:30:14.636723   14017 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0722 10:30:14.636744   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0722 10:30:14.638495   14017 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0722 10:30:14.638512   14017 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0722 10:30:14.803141   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 10:30:14.806153   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0722 10:30:14.966748   14017 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0722 10:30:14.966772   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0722 10:30:14.979835   14017 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0722 10:30:14.979867   14017 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0722 10:30:14.996275   14017 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0722 10:30:14.996300   14017 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0722 10:30:15.338896   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0722 10:30:15.387338   14017 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0722 10:30:15.387370   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0722 10:30:15.442765   14017 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0722 10:30:15.442792   14017 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0722 10:30:15.644473   14017 pod_ready.go:102] pod "coredns-7db6d8ff4d-kdg7f" in "kube-system" namespace has status "Ready":"False"
	I0722 10:30:15.708318   14017 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0722 10:30:15.708340   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0722 10:30:15.750717   14017 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.324511248s)
	I0722 10:30:15.750753   14017 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0722 10:30:15.763091   14017 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0722 10:30:15.763124   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0722 10:30:16.023332   14017 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0722 10:30:16.023363   14017 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0722 10:30:16.060541   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0722 10:30:16.255956   14017 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-362127" context rescaled to 1 replicas
	I0722 10:30:16.294218   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0722 10:30:17.711255   14017 pod_ready.go:92] pod "coredns-7db6d8ff4d-kdg7f" in "kube-system" namespace has status "Ready":"True"
	I0722 10:30:17.711287   14017 pod_ready.go:81] duration metric: took 4.073364802s for pod "coredns-7db6d8ff4d-kdg7f" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:17.711301   14017 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rdwgl" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:17.812508   14017 pod_ready.go:92] pod "coredns-7db6d8ff4d-rdwgl" in "kube-system" namespace has status "Ready":"True"
	I0722 10:30:17.812532   14017 pod_ready.go:81] duration metric: took 101.223088ms for pod "coredns-7db6d8ff4d-rdwgl" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:17.812545   14017 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-362127" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:17.918052   14017 pod_ready.go:92] pod "etcd-addons-362127" in "kube-system" namespace has status "Ready":"True"
	I0722 10:30:17.918075   14017 pod_ready.go:81] duration metric: took 105.522311ms for pod "etcd-addons-362127" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:17.918086   14017 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-362127" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:17.971809   14017 pod_ready.go:92] pod "kube-apiserver-addons-362127" in "kube-system" namespace has status "Ready":"True"
	I0722 10:30:17.971832   14017 pod_ready.go:81] duration metric: took 53.738027ms for pod "kube-apiserver-addons-362127" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:17.971844   14017 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-362127" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:18.094589   14017 pod_ready.go:92] pod "kube-controller-manager-addons-362127" in "kube-system" namespace has status "Ready":"True"
	I0722 10:30:18.094616   14017 pod_ready.go:81] duration metric: took 122.763299ms for pod "kube-controller-manager-addons-362127" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:18.094629   14017 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w2bc4" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:18.228465   14017 pod_ready.go:92] pod "kube-proxy-w2bc4" in "kube-system" namespace has status "Ready":"True"
	I0722 10:30:18.228490   14017 pod_ready.go:81] duration metric: took 133.85389ms for pod "kube-proxy-w2bc4" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:18.228500   14017 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-362127" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:18.565987   14017 pod_ready.go:92] pod "kube-scheduler-addons-362127" in "kube-system" namespace has status "Ready":"True"
	I0722 10:30:18.566015   14017 pod_ready.go:81] duration metric: took 337.508324ms for pod "kube-scheduler-addons-362127" in "kube-system" namespace to be "Ready" ...
	I0722 10:30:18.566026   14017 pod_ready.go:38] duration metric: took 4.936352102s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 10:30:18.566043   14017 api_server.go:52] waiting for apiserver process to appear ...
	I0722 10:30:18.566103   14017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:30:18.867368   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.376292276s)
	I0722 10:30:18.867416   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.217570187s)
	I0722 10:30:18.867427   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.867444   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.867452   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.867464   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.867476   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.139804731s)
	I0722 10:30:18.867525   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.867539   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.867523   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.119594513s)
	I0722 10:30:18.867562   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.071487883s)
	I0722 10:30:18.867573   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.867579   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.867585   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.867589   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.867666   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.042600149s)
	I0722 10:30:18.867693   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.867704   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.867974   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.868021   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.868177   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.868200   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.868224   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.868027   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.868048   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.868048   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.868071   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.868345   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.868365   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.868399   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.868082   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.868701   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.868712   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.868720   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.869009   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.869052   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.869090   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.869108   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.868093   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.869161   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.869181   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.869200   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.868103   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.869275   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.869295   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.869909   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.869943   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.869952   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.868110   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.870080   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.870092   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.870103   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.868120   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.868130   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.870156   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.870166   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:18.870174   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:18.870761   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.870776   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.871244   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.871296   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.871321   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:18.871688   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:18.871734   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:18.871751   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:19.042794   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:19.042817   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:19.043230   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:19.043291   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:20.122398   14017 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0722 10:30:20.122437   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:20.125843   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:20.126330   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:20.126358   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:20.126534   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:20.126750   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:20.126918   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:20.127069   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:20.637914   14017 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0722 10:30:20.854680   14017 addons.go:234] Setting addon gcp-auth=true in "addons-362127"
	I0722 10:30:20.854727   14017 host.go:66] Checking if "addons-362127" exists ...
	I0722 10:30:20.855093   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:20.855136   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:20.870284   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0722 10:30:20.870713   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:20.871175   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:20.871192   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:20.871504   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:20.871996   14017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:30:20.872025   14017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:30:20.886670   14017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41301
	I0722 10:30:20.887053   14017 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:30:20.887510   14017 main.go:141] libmachine: Using API Version  1
	I0722 10:30:20.887530   14017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:30:20.887866   14017 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:30:20.888072   14017 main.go:141] libmachine: (addons-362127) Calling .GetState
	I0722 10:30:20.889626   14017 main.go:141] libmachine: (addons-362127) Calling .DriverName
	I0722 10:30:20.889837   14017 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0722 10:30:20.889860   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHHostname
	I0722 10:30:20.892433   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:20.892811   14017 main.go:141] libmachine: (addons-362127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:13:55", ip: ""} in network mk-addons-362127: {Iface:virbr1 ExpiryTime:2024-07-22 11:29:34 +0000 UTC Type:0 Mac:52:54:00:5d:13:55 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-362127 Clientid:01:52:54:00:5d:13:55}
	I0722 10:30:20.892835   14017 main.go:141] libmachine: (addons-362127) DBG | domain addons-362127 has defined IP address 192.168.39.147 and MAC address 52:54:00:5d:13:55 in network mk-addons-362127
	I0722 10:30:20.892980   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHPort
	I0722 10:30:20.893178   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHKeyPath
	I0722 10:30:20.893327   14017 main.go:141] libmachine: (addons-362127) Calling .GetSSHUsername
	I0722 10:30:20.893482   14017 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/addons-362127/id_rsa Username:docker}
	I0722 10:30:21.734624   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.848243185s)
	I0722 10:30:21.734647   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.53104502s)
	I0722 10:30:21.734677   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.734685   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.734690   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.734696   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.734718   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.494202834s)
	I0722 10:30:21.734756   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.734772   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.734793   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.931623748s)
	I0722 10:30:21.734821   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.734830   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.734894   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.928713005s)
	I0722 10:30:21.735018   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.735027   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.735032   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.735038   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.735052   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.735056   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.735060   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.735064   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.735065   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.735083   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.735091   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.735102   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.735109   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.735117   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.735124   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.735176   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.735202   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.735210   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.735218   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.735224   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.735314   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.735329   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.735338   14017 addons.go:475] Verifying addon ingress=true in "addons-362127"
	I0722 10:30:21.735476   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.735486   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.735495   14017 addons.go:475] Verifying addon registry=true in "addons-362127"
	I0722 10:30:21.735529   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.735597   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.735644   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.735664   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.735860   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.735925   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.735946   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.735976   14017 addons.go:475] Verifying addon metrics-server=true in "addons-362127"
	I0722 10:30:21.735556   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.737630   14017 out.go:177] * Verifying registry addon...
	I0722 10:30:21.738063   14017 out.go:177] * Verifying ingress addon...
	I0722 10:30:21.738291   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.738341   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.738383   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.738400   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.738408   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.738639   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.738655   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.740085   14017 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-362127 service yakd-dashboard -n yakd-dashboard
	
	I0722 10:30:21.740325   14017 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0722 10:30:21.740425   14017 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0722 10:30:21.749906   14017 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0722 10:30:21.749922   14017 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0722 10:30:21.749933   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:21.749930   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:21.768058   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.768081   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.768442   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.768487   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.768495   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.793332   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.454399323s)
	W0722 10:30:21.793382   14017 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0722 10:30:21.793421   14017 retry.go:31] will retry after 166.70586ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0722 10:30:21.793462   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.732876447s)
	I0722 10:30:21.793510   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.793526   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.793792   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.793811   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.793826   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:21.793834   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:21.794046   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:21.794089   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:21.794104   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:21.960836   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0722 10:30:22.253459   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:22.254491   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:22.758511   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:22.766317   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:22.795283   14017 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.229154897s)
	I0722 10:30:22.795330   14017 api_server.go:72] duration metric: took 9.791741777s to wait for apiserver process to appear ...
	I0722 10:30:22.795340   14017 api_server.go:88] waiting for apiserver healthz status ...
	I0722 10:30:22.795364   14017 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0722 10:30:22.795356   14017 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.905497122s)
	I0722 10:30:22.795371   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.501103542s)
	I0722 10:30:22.795564   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:22.795580   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:22.795856   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:22.795880   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:22.795890   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:22.795926   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:22.796208   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:22.796222   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:22.796232   14017 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-362127"
	I0722 10:30:22.797378   14017 out.go:177] * Verifying csi-hostpath-driver addon...
	I0722 10:30:22.797377   14017 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0722 10:30:22.798969   14017 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0722 10:30:22.799720   14017 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0722 10:30:22.800175   14017 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0722 10:30:22.800191   14017 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0722 10:30:22.831609   14017 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0722 10:30:22.831629   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:22.849605   14017 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0722 10:30:22.853692   14017 api_server.go:141] control plane version: v1.30.3
	I0722 10:30:22.853719   14017 api_server.go:131] duration metric: took 58.372032ms to wait for apiserver health ...
	I0722 10:30:22.853730   14017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 10:30:22.879001   14017 system_pods.go:59] 19 kube-system pods found
	I0722 10:30:22.879029   14017 system_pods.go:61] "coredns-7db6d8ff4d-kdg7f" [24a11171-e5fb-488e-b75e-bbfffd042dc4] Running
	I0722 10:30:22.879034   14017 system_pods.go:61] "coredns-7db6d8ff4d-rdwgl" [10f869a5-d53d-4fc2-94d5-cab1e86811b8] Running
	I0722 10:30:22.879040   14017 system_pods.go:61] "csi-hostpath-attacher-0" [556914c5-386d-44c4-acde-a28f10ecd9a1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0722 10:30:22.879045   14017 system_pods.go:61] "csi-hostpath-resizer-0" [ae0dd06b-0088-4667-a538-82fd9abe6baf] Pending
	I0722 10:30:22.879052   14017 system_pods.go:61] "csi-hostpathplugin-hhxpr" [bc97fa01-6616-4254-93df-9873804b1648] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0722 10:30:22.879057   14017 system_pods.go:61] "etcd-addons-362127" [891099bd-687b-4464-8fe2-2d076f624f4f] Running
	I0722 10:30:22.879061   14017 system_pods.go:61] "kube-apiserver-addons-362127" [5a73f7d1-40d1-4d7a-adc9-58ad4eade2c4] Running
	I0722 10:30:22.879064   14017 system_pods.go:61] "kube-controller-manager-addons-362127" [98562678-7e43-4123-bb91-b800b0438089] Running
	I0722 10:30:22.879069   14017 system_pods.go:61] "kube-ingress-dns-minikube" [f2028cf5-46d0-41bc-b6b8-bc8e75607ab4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0722 10:30:22.879072   14017 system_pods.go:61] "kube-proxy-w2bc4" [fff33042-273b-43a2-b72e-7c8a8e6df754] Running
	I0722 10:30:22.879076   14017 system_pods.go:61] "kube-scheduler-addons-362127" [bbe6aea9-80e6-4242-9e26-782460721059] Running
	I0722 10:30:22.879080   14017 system_pods.go:61] "metrics-server-c59844bb4-c7dpf" [7d0a2a6c-b7cf-488c-97d6-3fb459a706c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 10:30:22.879086   14017 system_pods.go:61] "nvidia-device-plugin-daemonset-2k5sr" [2de5556d-cd17-43f7-ba1d-8cc5e131883f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0722 10:30:22.879094   14017 system_pods.go:61] "registry-656c9c8d9c-4sfgx" [b3bc8b0a-e99b-4bf9-aed3-da909aeab28c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0722 10:30:22.879098   14017 system_pods.go:61] "registry-proxy-7tgcs" [30014df8-8abc-48a5-85ce-7a4ab5e79732] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0722 10:30:22.879107   14017 system_pods.go:61] "snapshot-controller-745499f584-m5h79" [656ece8c-0bbc-4456-be78-2c1741b0719e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0722 10:30:22.879117   14017 system_pods.go:61] "snapshot-controller-745499f584-z65vw" [0a051515-d3ec-40cb-a825-f274b48a611e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0722 10:30:22.879123   14017 system_pods.go:61] "storage-provisioner" [ca3da52f-e625-4fbf-8bf7-39f0bd596c5c] Running
	I0722 10:30:22.879128   14017 system_pods.go:61] "tiller-deploy-6677d64bcd-89cmg" [4311f07e-4fde-45b6-ab03-28badd1c17a1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0722 10:30:22.879133   14017 system_pods.go:74] duration metric: took 25.398715ms to wait for pod list to return data ...
	I0722 10:30:22.879141   14017 default_sa.go:34] waiting for default service account to be created ...
	I0722 10:30:22.884021   14017 default_sa.go:45] found service account: "default"
	I0722 10:30:22.884039   14017 default_sa.go:55] duration metric: took 4.890859ms for default service account to be created ...
	I0722 10:30:22.884047   14017 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 10:30:22.909036   14017 system_pods.go:86] 19 kube-system pods found
	I0722 10:30:22.909061   14017 system_pods.go:89] "coredns-7db6d8ff4d-kdg7f" [24a11171-e5fb-488e-b75e-bbfffd042dc4] Running
	I0722 10:30:22.909068   14017 system_pods.go:89] "coredns-7db6d8ff4d-rdwgl" [10f869a5-d53d-4fc2-94d5-cab1e86811b8] Running
	I0722 10:30:22.909074   14017 system_pods.go:89] "csi-hostpath-attacher-0" [556914c5-386d-44c4-acde-a28f10ecd9a1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0722 10:30:22.909082   14017 system_pods.go:89] "csi-hostpath-resizer-0" [ae0dd06b-0088-4667-a538-82fd9abe6baf] Pending
	I0722 10:30:22.909091   14017 system_pods.go:89] "csi-hostpathplugin-hhxpr" [bc97fa01-6616-4254-93df-9873804b1648] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0722 10:30:22.909096   14017 system_pods.go:89] "etcd-addons-362127" [891099bd-687b-4464-8fe2-2d076f624f4f] Running
	I0722 10:30:22.909101   14017 system_pods.go:89] "kube-apiserver-addons-362127" [5a73f7d1-40d1-4d7a-adc9-58ad4eade2c4] Running
	I0722 10:30:22.909105   14017 system_pods.go:89] "kube-controller-manager-addons-362127" [98562678-7e43-4123-bb91-b800b0438089] Running
	I0722 10:30:22.909115   14017 system_pods.go:89] "kube-ingress-dns-minikube" [f2028cf5-46d0-41bc-b6b8-bc8e75607ab4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0722 10:30:22.909119   14017 system_pods.go:89] "kube-proxy-w2bc4" [fff33042-273b-43a2-b72e-7c8a8e6df754] Running
	I0722 10:30:22.909124   14017 system_pods.go:89] "kube-scheduler-addons-362127" [bbe6aea9-80e6-4242-9e26-782460721059] Running
	I0722 10:30:22.909129   14017 system_pods.go:89] "metrics-server-c59844bb4-c7dpf" [7d0a2a6c-b7cf-488c-97d6-3fb459a706c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 10:30:22.909136   14017 system_pods.go:89] "nvidia-device-plugin-daemonset-2k5sr" [2de5556d-cd17-43f7-ba1d-8cc5e131883f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0722 10:30:22.909144   14017 system_pods.go:89] "registry-656c9c8d9c-4sfgx" [b3bc8b0a-e99b-4bf9-aed3-da909aeab28c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0722 10:30:22.909152   14017 system_pods.go:89] "registry-proxy-7tgcs" [30014df8-8abc-48a5-85ce-7a4ab5e79732] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0722 10:30:22.909158   14017 system_pods.go:89] "snapshot-controller-745499f584-m5h79" [656ece8c-0bbc-4456-be78-2c1741b0719e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0722 10:30:22.909166   14017 system_pods.go:89] "snapshot-controller-745499f584-z65vw" [0a051515-d3ec-40cb-a825-f274b48a611e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0722 10:30:22.909170   14017 system_pods.go:89] "storage-provisioner" [ca3da52f-e625-4fbf-8bf7-39f0bd596c5c] Running
	I0722 10:30:22.909176   14017 system_pods.go:89] "tiller-deploy-6677d64bcd-89cmg" [4311f07e-4fde-45b6-ab03-28badd1c17a1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0722 10:30:22.909183   14017 system_pods.go:126] duration metric: took 25.13136ms to wait for k8s-apps to be running ...
	I0722 10:30:22.909190   14017 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 10:30:22.909232   14017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:30:23.005410   14017 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0722 10:30:23.005434   14017 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0722 10:30:23.122393   14017 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0722 10:30:23.122430   14017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0722 10:30:23.245456   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:23.245781   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:23.256888   14017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0722 10:30:23.305638   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:23.746594   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:23.749829   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:23.806014   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:24.265525   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:24.271197   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:24.340077   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:24.667430   14017 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.758169956s)
	I0722 10:30:24.667465   14017 system_svc.go:56] duration metric: took 1.758270111s WaitForService to wait for kubelet
	I0722 10:30:24.667476   14017 kubeadm.go:582] duration metric: took 11.663887113s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 10:30:24.667500   14017 node_conditions.go:102] verifying NodePressure condition ...
	I0722 10:30:24.667435   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.706563354s)
	I0722 10:30:24.667585   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:24.667604   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:24.667851   14017 main.go:141] libmachine: (addons-362127) DBG | Closing plugin on server side
	I0722 10:30:24.667856   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:24.667887   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:24.667900   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:24.667912   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:24.668136   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:24.668152   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:24.670158   14017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 10:30:24.670177   14017 node_conditions.go:123] node cpu capacity is 2
	I0722 10:30:24.670188   14017 node_conditions.go:105] duration metric: took 2.682507ms to run NodePressure ...
	I0722 10:30:24.670200   14017 start.go:241] waiting for startup goroutines ...
	I0722 10:30:24.744901   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:24.745329   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:24.808513   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:25.033452   14017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.776527126s)
	I0722 10:30:25.033513   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:25.033533   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:25.033822   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:25.033840   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:25.033849   14017 main.go:141] libmachine: Making call to close driver server
	I0722 10:30:25.033859   14017 main.go:141] libmachine: (addons-362127) Calling .Close
	I0722 10:30:25.034089   14017 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:30:25.034107   14017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:30:25.035514   14017 addons.go:475] Verifying addon gcp-auth=true in "addons-362127"
	I0722 10:30:25.036900   14017 out.go:177] * Verifying gcp-auth addon...
	I0722 10:30:25.038798   14017 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0722 10:30:25.050487   14017 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0722 10:30:25.050509   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:25.245914   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:25.246358   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:25.306026   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:25.543116   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:25.745609   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:25.746031   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:25.805369   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:26.041795   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:26.245866   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:26.247845   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:26.305545   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:26.542336   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:26.876142   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:26.876285   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:26.879274   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:27.042804   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:27.246859   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:27.247184   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:27.305268   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:27.543357   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:27.745784   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:27.747254   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:27.806254   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:28.042325   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:28.245450   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:28.246169   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:28.305166   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:28.543263   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:28.746659   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:28.746989   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:28.805493   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:29.044426   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:29.245559   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:29.249244   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:29.307159   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:29.542897   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:29.746515   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:29.753180   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:29.807348   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:30.042868   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:30.246567   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:30.246584   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:30.305137   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:30.543073   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:30.746622   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:30.746686   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:30.805599   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:31.365157   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:31.365316   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:31.366006   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:31.367186   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:31.542286   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:31.746482   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:31.747845   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:31.804953   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:32.042405   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:32.247637   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:32.247950   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:32.304971   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:32.543087   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:32.746162   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:32.747930   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:32.804439   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:33.041992   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:33.246694   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:33.246967   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:33.305261   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:33.543074   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:33.747514   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:33.747687   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:33.806069   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:34.043040   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:34.246018   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:34.247597   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:34.304936   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:34.542908   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:34.745659   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:34.747782   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:34.806416   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:35.042689   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:35.244923   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:35.246139   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:35.304655   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:35.543703   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:35.747035   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:35.747273   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:35.806360   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:36.043412   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:36.246144   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:36.246213   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:36.305576   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:36.543640   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:36.750085   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:36.750289   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:36.805463   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:37.043240   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:37.246757   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:37.246945   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:37.306475   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:37.543171   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:37.747058   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:37.747314   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:37.808188   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:38.042155   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:38.246425   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:38.249297   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:38.304982   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:38.543471   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:38.745586   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:38.748301   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:38.804592   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:39.042707   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:39.246090   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:39.246332   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:39.307764   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:39.542113   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:39.745769   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:39.746070   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:39.804509   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:40.042010   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:40.245489   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:40.245714   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:40.305906   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:40.974337   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:40.974891   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:40.975164   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:40.976172   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:41.043562   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:41.245266   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:41.246995   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:41.305043   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:41.555348   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:41.747069   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:41.747223   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:41.808560   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:42.042721   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:42.245464   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:42.249003   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:42.305995   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:42.542365   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:42.745971   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:42.747086   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:42.804969   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:43.043138   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:43.245540   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:43.245608   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:43.305541   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:43.542735   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:43.746245   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:43.746455   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:43.805117   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:44.042965   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:44.245948   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:44.246037   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:44.305908   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:44.542481   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:44.746945   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:44.747082   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:44.807255   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:45.043759   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:45.246298   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:45.248510   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:45.304836   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:45.542731   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:45.746017   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:45.746211   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:45.805939   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:46.042073   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:46.245755   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:46.246512   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:46.305340   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:46.553999   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:46.745582   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:46.745768   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:46.805827   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:47.042685   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:47.246090   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:47.247169   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0722 10:30:47.304595   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:47.544504   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:47.746233   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:47.746303   14017 kapi.go:107] duration metric: took 26.005874052s to wait for kubernetes.io/minikube-addons=registry ...
	I0722 10:30:47.806426   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:48.042731   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:48.244924   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:48.305394   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:48.656975   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:48.745084   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:48.806114   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:49.042355   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:49.245137   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:49.307983   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:49.544096   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:49.745342   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:49.805952   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:50.042250   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:50.244762   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:50.305676   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:50.554215   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:50.745100   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:50.806898   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:51.042405   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:51.246301   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:51.305951   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:51.543136   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:51.746678   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:51.805508   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:52.042260   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:52.245098   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:52.305549   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:52.542201   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:52.744827   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:52.805220   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:53.043170   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:53.245570   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:53.306606   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:53.544072   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:53.747296   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:53.806586   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:54.045486   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:54.244466   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:54.305178   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:54.548251   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:54.749307   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:54.806257   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:55.043393   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:55.245338   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:55.305105   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:55.542502   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:55.744165   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:55.806355   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:56.043014   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:56.245030   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:56.305660   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:56.544799   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:56.745402   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:56.806148   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:57.153686   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:57.244642   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:57.313837   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:57.542467   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:57.750949   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:57.806023   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:58.046735   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:58.246980   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:58.308882   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:58.542564   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:58.748134   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:58.806065   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:59.042995   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:59.245176   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:59.306347   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:30:59.543409   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:30:59.745202   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:30:59.805518   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:00.042449   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:00.244641   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:00.305582   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:00.543570   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:00.744474   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:00.805028   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:01.042339   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:01.245153   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:01.305577   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:01.542005   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:01.745983   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:02.237000   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:02.237625   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:02.245024   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:02.305549   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:02.546283   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:02.745545   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:02.809254   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:03.044282   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:03.245062   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:03.305062   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:03.542900   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:03.744600   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:03.806064   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:04.042830   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:04.244754   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:04.305182   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:04.542698   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:04.745513   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:04.806480   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:05.042030   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:05.245136   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:05.306444   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:05.542491   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:06.086964   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:06.088051   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:06.089360   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:06.244124   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:06.308349   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:06.547978   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:06.744931   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:06.804921   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:07.042391   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:07.245026   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:07.304867   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:07.543342   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:07.752711   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:07.804941   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:08.044196   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:08.244975   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:08.305273   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:08.543432   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:08.747215   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:08.806411   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:09.042431   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:09.246350   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:09.305600   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:09.543251   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:09.745040   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:09.808931   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:10.042111   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:10.244930   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:10.307112   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:10.547585   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:11.136092   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:11.137164   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:11.137260   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:11.246520   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:11.304487   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:11.542373   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:11.746274   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:11.811416   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:12.048882   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:12.244441   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:12.310901   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:12.543092   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:12.745226   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:12.806244   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:13.045020   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:13.245256   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:13.308730   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:13.542508   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:13.744736   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:13.805615   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:14.043529   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:14.245632   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:14.305867   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:14.543053   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:14.745097   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:14.806136   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:15.042368   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:15.252649   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:15.311633   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:15.543434   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:15.744373   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:15.804542   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:16.042770   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:16.244665   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:16.306810   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:16.542211   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:16.744777   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:16.805232   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:17.042867   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:17.244807   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:17.304997   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:17.543293   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:17.747942   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:17.805247   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:18.042692   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:18.244757   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:18.304906   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:18.542673   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:18.745183   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:18.806048   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:19.045827   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:19.247573   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:19.305023   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:19.543560   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:19.746348   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:19.810716   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0722 10:31:20.043419   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:20.243921   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:20.305230   14017 kapi.go:107] duration metric: took 57.505506674s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0722 10:31:20.542759   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:20.744679   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:21.042897   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:21.245130   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:21.542964   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:21.745295   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:22.042036   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:22.244675   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:22.542849   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:22.745213   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:23.043263   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:23.245095   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:23.542913   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:23.745006   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:24.042653   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:24.244095   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:24.542690   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:24.745071   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:25.043595   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:25.244318   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:25.542193   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:25.745792   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:26.042623   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:26.244030   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:26.543044   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:26.745099   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:27.042640   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:27.244840   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:27.542177   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:27.744865   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:28.042939   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:28.245305   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:28.543949   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:28.746147   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:29.043150   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:29.246028   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:29.542823   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:29.744670   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:30.042597   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:30.243915   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:30.542777   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:30.746458   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:31.042746   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:31.245490   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:31.543031   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:31.745016   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:32.042959   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:32.244530   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:32.542846   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:32.744493   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:33.042366   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:33.245653   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:33.924658   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:33.926897   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:34.042466   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:34.251612   14017 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0722 10:31:34.548332   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:34.744688   14017 kapi.go:107] duration metric: took 1m13.004359376s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0722 10:31:35.054037   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:35.542697   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:36.045848   14017 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0722 10:31:36.542417   14017 kapi.go:107] duration metric: took 1m11.503612529s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0722 10:31:36.544014   14017 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-362127 cluster.
	I0722 10:31:36.545191   14017 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0722 10:31:36.546441   14017 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0722 10:31:36.547884   14017 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, helm-tiller, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0722 10:31:36.549263   14017 addons.go:510] duration metric: took 1m23.5456299s for enable addons: enabled=[storage-provisioner cloud-spanner helm-tiller nvidia-device-plugin ingress-dns storage-provisioner-rancher metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0722 10:31:36.549298   14017 start.go:246] waiting for cluster config update ...
	I0722 10:31:36.549313   14017 start.go:255] writing updated cluster config ...
	I0722 10:31:36.549581   14017 ssh_runner.go:195] Run: rm -f paused
	I0722 10:31:36.599936   14017 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 10:31:36.601881   14017 out.go:177] * Done! kubectl is now configured to use "addons-362127" cluster and "default" namespace by default
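	(Editor's note, not part of the captured log.) The gcp-auth hint above says a pod can opt out of credential injection by carrying a `gcp-auth-skip-secret` label. A minimal sketch of such a pod manifest is shown below; the pod name and the label value "true" are assumptions for illustration and are not taken from this test run, while the image reference is copied from the report's own container list.

	# Hypothetical manifest illustrating the gcp-auth-skip-secret label
	# mentioned in the minikube output above; name and label value are assumed.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-demo            # hypothetical name, not from the test run
	  labels:
	    gcp-auth-skip-secret: "true"    # assumed value; key taken from the log hint
	spec:
	  containers:
	  - name: app
	    image: docker.io/kicbase/echo-server   # image that appears in this report's container list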
	
	
	==> CRI-O <==
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.168202138Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721644650168176325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580633,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ff1941c-5d62-468d-9c71-687813cbb696 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.168725742Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e06698d2-dcb8-4ca9-8af7-ec5fe2a7ccda name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.168791840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e06698d2-dcb8-4ca9-8af7-ec5fe2a7ccda name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.169063294Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:800f46b915ccc989770628b7ef110a36e03a045d523dad4fb4b6b43da4e30d08,PodSandboxId:a4761b99cae6cefcf03c97a1aa28ec40d1095292fd0463fa10d356e56d3b3983,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721644465474532720,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-lj5kn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43b1d5c8-b098-4afc-b72d-a5e7c55e8230,},Annotations:map[string]string{io.kubernetes.container.hash: 80638108,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73770316dba17d1304dae312faad6d9decde06687f804b62f2b480a60173a1f4,PodSandboxId:32307f7379d78fe5cf030b62b505697b66a8aae49e78bfe4d9e87f912d5e97cc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721644328376481564,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b92b935b-6089-4609-bf2a-f636364a6400,},Annotations:map[string]string{io.kubernet
es.container.hash: 7365ae81,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fcc9665ac1478218288ecc39835278145d4508db1fed0c9cb762c6c8743d35e,PodSandboxId:ec78a497784b7b5c2bb3a7b215d06bb6694aa37311d0dcd2a81d24b05cfcf74c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721644302838909713,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-25xv5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: a1da9ddd-aa30-431f-8b6d-4f19b1f7d384,},Annotations:map[string]string{io.kubernetes.container.hash: 86c15fa3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d97614ce321bdd3c0a5f2626302115312b0d38930a1ca04d667bd107517db29,PodSandboxId:d7d452b7fdf1ade8b12c048b644023212faf4a2227ff9f507a81f80ec63f96ca,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721644295978944281,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5s6sz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 252ee88a-9e97-4f05-888e-6ffd4a637403,},Annotations:map[string]string{io.kubernetes.container.hash: 43bfb3ae,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:894be7fb7c0a807b7c662295b05cec0f6d95d5d1bc597309d4250ed48d7d09de,PodSandboxId:efc9f77cdcb60ea0f17a9bef4832e0965b3f2345e28c7006b3519170f9e6787b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721644
262442510447,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-6h47n,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 75bce171-cade-4a90-afba-510f2e9fb3ce,},Annotations:map[string]string{io.kubernetes.container.hash: 53151e59,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0,PodSandboxId:854ef3e35e4e76b896ebd8a6beb512a4cd95de01c58fdd6e561708e8d6d29582,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721644257260184770,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-c7dpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d0a2a6c-b7cf-488c-97d6-3fb459a706c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12ef7c28,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a023d343932260c127b3307e1f989c331d5447f05f601090ab7113a5cb23a336,PodSandboxId:6ba14bf1d6d237986b92ccf1497b6991fbc64a0651e36f09e826149911e3d28c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721644220094314339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca3da52f-e625-4fbf-8bf7-39f0bd596c5c,},Annotations:map[string]string{io.kubernetes.container.hash: 56a3380a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3009a540031d07af2dafd5a461ece3fc0a592c78dffca549823a7a45d64884c6,PodSandboxId:8b050e4257f8ca8fb66e7b9aaa2e1ce7c7945c7ccbb8c11fabfd0a443b087499,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721644215090459391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rdwgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f869a5-d53d-4fc2-94d5-cab1e86811b8,},Annotations:map[string]string{io.kubernetes.container.hash: 890fcdbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1038681b91ded1a2d4021ea99fa938b6f32fedd35dbd401e2bd11648def7d0d4,PodSandb
oxId:027d60d52d84ed76e305f40fa04758485e5c25626399e3ec0c93c17ed58ba809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721644213921900076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2bc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff33042-273b-43a2-b72e-7c8a8e6df754,},Annotations:map[string]string{io.kubernetes.container.hash: aca09f34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca87cc163cf9b4e4096bb74373a2a1ae9bbed994f549306389e78c8c94ab7f06,PodSandboxId:91c461356ca15c30aa43669c0cff
4ea42d2937d49be2c27be9cd808b5dc09baf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721644193426779498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4905a3c9eb7bc4f54b167e5a235e510,},Annotations:map[string]string{io.kubernetes.container.hash: b2869ad3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8095e98c3220c8c62fe67b807e326eb446a2367817da1ba2e256d88b98cfc382,PodSandboxId:4c5d47da23c1cd808a31a160a186f4058bf06b6949b4fe2e592f963e23e6192a,Metadata:&
ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721644193451565864,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e369930aaf656658263c0657bf4d260,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbab7bd85e1c45b81b0efd7edebef5063711bcc40003e385cfb6f934ca225da,PodSandboxId:a4b8b78d8e3a48d821ccd264f7bc3347e499661946cae127409dcc381a2c8637,Metadata:&ContainerMetadata
{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721644193457041806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2b929fed983322acba41469dd7b540,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9243bc8ee19c7a6e52cac3020b654481b13ba5e02863bda9c7eb8f933bd3fa7e,PodSandboxId:f5aa39a9d64e91d19e7598d6ecb2cccbe5b5acf5480d595372a5fc59ea209250,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721644193399117470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ea7606da16847bd79a635784b5bb097,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae5f20f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e06698d2-dcb8-4ca9-8af7-ec5fe2a7ccda name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.205617118Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1412eed6-18f9-4520-9754-ab6fde47fd3e name=/runtime.v1.RuntimeService/Version
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.205689084Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1412eed6-18f9-4520-9754-ab6fde47fd3e name=/runtime.v1.RuntimeService/Version
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.207025802Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55264797-0c27-4d0a-a9d0-ad85410da0d9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.208279551Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721644650208251929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580633,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55264797-0c27-4d0a-a9d0-ad85410da0d9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.208830758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e439619-006d-4441-810b-61cb8b49d0a2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.208902420Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e439619-006d-4441-810b-61cb8b49d0a2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.209163659Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:800f46b915ccc989770628b7ef110a36e03a045d523dad4fb4b6b43da4e30d08,PodSandboxId:a4761b99cae6cefcf03c97a1aa28ec40d1095292fd0463fa10d356e56d3b3983,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721644465474532720,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-lj5kn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43b1d5c8-b098-4afc-b72d-a5e7c55e8230,},Annotations:map[string]string{io.kubernetes.container.hash: 80638108,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73770316dba17d1304dae312faad6d9decde06687f804b62f2b480a60173a1f4,PodSandboxId:32307f7379d78fe5cf030b62b505697b66a8aae49e78bfe4d9e87f912d5e97cc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721644328376481564,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b92b935b-6089-4609-bf2a-f636364a6400,},Annotations:map[string]string{io.kubernet
es.container.hash: 7365ae81,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fcc9665ac1478218288ecc39835278145d4508db1fed0c9cb762c6c8743d35e,PodSandboxId:ec78a497784b7b5c2bb3a7b215d06bb6694aa37311d0dcd2a81d24b05cfcf74c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721644302838909713,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-25xv5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: a1da9ddd-aa30-431f-8b6d-4f19b1f7d384,},Annotations:map[string]string{io.kubernetes.container.hash: 86c15fa3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d97614ce321bdd3c0a5f2626302115312b0d38930a1ca04d667bd107517db29,PodSandboxId:d7d452b7fdf1ade8b12c048b644023212faf4a2227ff9f507a81f80ec63f96ca,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721644295978944281,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5s6sz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 252ee88a-9e97-4f05-888e-6ffd4a637403,},Annotations:map[string]string{io.kubernetes.container.hash: 43bfb3ae,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:894be7fb7c0a807b7c662295b05cec0f6d95d5d1bc597309d4250ed48d7d09de,PodSandboxId:efc9f77cdcb60ea0f17a9bef4832e0965b3f2345e28c7006b3519170f9e6787b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721644
262442510447,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-6h47n,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 75bce171-cade-4a90-afba-510f2e9fb3ce,},Annotations:map[string]string{io.kubernetes.container.hash: 53151e59,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0,PodSandboxId:854ef3e35e4e76b896ebd8a6beb512a4cd95de01c58fdd6e561708e8d6d29582,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721644257260184770,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-c7dpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d0a2a6c-b7cf-488c-97d6-3fb459a706c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12ef7c28,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a023d343932260c127b3307e1f989c331d5447f05f601090ab7113a5cb23a336,PodSandboxId:6ba14bf1d6d237986b92ccf1497b6991fbc64a0651e36f09e826149911e3d28c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721644220094314339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca3da52f-e625-4fbf-8bf7-39f0bd596c5c,},Annotations:map[string]string{io.kubernetes.container.hash: 56a3380a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3009a540031d07af2dafd5a461ece3fc0a592c78dffca549823a7a45d64884c6,PodSandboxId:8b050e4257f8ca8fb66e7b9aaa2e1ce7c7945c7ccbb8c11fabfd0a443b087499,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721644215090459391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rdwgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f869a5-d53d-4fc2-94d5-cab1e86811b8,},Annotations:map[string]string{io.kubernetes.container.hash: 890fcdbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1038681b91ded1a2d4021ea99fa938b6f32fedd35dbd401e2bd11648def7d0d4,PodSandb
oxId:027d60d52d84ed76e305f40fa04758485e5c25626399e3ec0c93c17ed58ba809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721644213921900076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2bc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff33042-273b-43a2-b72e-7c8a8e6df754,},Annotations:map[string]string{io.kubernetes.container.hash: aca09f34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca87cc163cf9b4e4096bb74373a2a1ae9bbed994f549306389e78c8c94ab7f06,PodSandboxId:91c461356ca15c30aa43669c0cff
4ea42d2937d49be2c27be9cd808b5dc09baf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721644193426779498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4905a3c9eb7bc4f54b167e5a235e510,},Annotations:map[string]string{io.kubernetes.container.hash: b2869ad3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8095e98c3220c8c62fe67b807e326eb446a2367817da1ba2e256d88b98cfc382,PodSandboxId:4c5d47da23c1cd808a31a160a186f4058bf06b6949b4fe2e592f963e23e6192a,Metadata:&
ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721644193451565864,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e369930aaf656658263c0657bf4d260,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbab7bd85e1c45b81b0efd7edebef5063711bcc40003e385cfb6f934ca225da,PodSandboxId:a4b8b78d8e3a48d821ccd264f7bc3347e499661946cae127409dcc381a2c8637,Metadata:&ContainerMetadata
{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721644193457041806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2b929fed983322acba41469dd7b540,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9243bc8ee19c7a6e52cac3020b654481b13ba5e02863bda9c7eb8f933bd3fa7e,PodSandboxId:f5aa39a9d64e91d19e7598d6ecb2cccbe5b5acf5480d595372a5fc59ea209250,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721644193399117470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ea7606da16847bd79a635784b5bb097,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae5f20f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e439619-006d-4441-810b-61cb8b49d0a2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.242601141Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25cd6ec7-54c6-473c-98ce-75ec35e6ae8c name=/runtime.v1.RuntimeService/Version
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.242689802Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25cd6ec7-54c6-473c-98ce-75ec35e6ae8c name=/runtime.v1.RuntimeService/Version
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.243643963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76a46841-e651-4445-94df-e0916cda98a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.244900541Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721644650244877708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580633,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76a46841-e651-4445-94df-e0916cda98a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.245486586Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b05a7bb-cd30-4e14-8814-0dadbec33bf7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.245561871Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b05a7bb-cd30-4e14-8814-0dadbec33bf7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.245834660Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:800f46b915ccc989770628b7ef110a36e03a045d523dad4fb4b6b43da4e30d08,PodSandboxId:a4761b99cae6cefcf03c97a1aa28ec40d1095292fd0463fa10d356e56d3b3983,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721644465474532720,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-lj5kn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43b1d5c8-b098-4afc-b72d-a5e7c55e8230,},Annotations:map[string]string{io.kubernetes.container.hash: 80638108,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73770316dba17d1304dae312faad6d9decde06687f804b62f2b480a60173a1f4,PodSandboxId:32307f7379d78fe5cf030b62b505697b66a8aae49e78bfe4d9e87f912d5e97cc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721644328376481564,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b92b935b-6089-4609-bf2a-f636364a6400,},Annotations:map[string]string{io.kubernet
es.container.hash: 7365ae81,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fcc9665ac1478218288ecc39835278145d4508db1fed0c9cb762c6c8743d35e,PodSandboxId:ec78a497784b7b5c2bb3a7b215d06bb6694aa37311d0dcd2a81d24b05cfcf74c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721644302838909713,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-25xv5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: a1da9ddd-aa30-431f-8b6d-4f19b1f7d384,},Annotations:map[string]string{io.kubernetes.container.hash: 86c15fa3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d97614ce321bdd3c0a5f2626302115312b0d38930a1ca04d667bd107517db29,PodSandboxId:d7d452b7fdf1ade8b12c048b644023212faf4a2227ff9f507a81f80ec63f96ca,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721644295978944281,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5s6sz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 252ee88a-9e97-4f05-888e-6ffd4a637403,},Annotations:map[string]string{io.kubernetes.container.hash: 43bfb3ae,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:894be7fb7c0a807b7c662295b05cec0f6d95d5d1bc597309d4250ed48d7d09de,PodSandboxId:efc9f77cdcb60ea0f17a9bef4832e0965b3f2345e28c7006b3519170f9e6787b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721644
262442510447,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-6h47n,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 75bce171-cade-4a90-afba-510f2e9fb3ce,},Annotations:map[string]string{io.kubernetes.container.hash: 53151e59,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0,PodSandboxId:854ef3e35e4e76b896ebd8a6beb512a4cd95de01c58fdd6e561708e8d6d29582,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721644257260184770,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-c7dpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d0a2a6c-b7cf-488c-97d6-3fb459a706c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12ef7c28,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a023d343932260c127b3307e1f989c331d5447f05f601090ab7113a5cb23a336,PodSandboxId:6ba14bf1d6d237986b92ccf1497b6991fbc64a0651e36f09e826149911e3d28c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721644220094314339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca3da52f-e625-4fbf-8bf7-39f0bd596c5c,},Annotations:map[string]string{io.kubernetes.container.hash: 56a3380a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3009a540031d07af2dafd5a461ece3fc0a592c78dffca549823a7a45d64884c6,PodSandboxId:8b050e4257f8ca8fb66e7b9aaa2e1ce7c7945c7ccbb8c11fabfd0a443b087499,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721644215090459391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rdwgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f869a5-d53d-4fc2-94d5-cab1e86811b8,},Annotations:map[string]string{io.kubernetes.container.hash: 890fcdbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1038681b91ded1a2d4021ea99fa938b6f32fedd35dbd401e2bd11648def7d0d4,PodSandb
oxId:027d60d52d84ed76e305f40fa04758485e5c25626399e3ec0c93c17ed58ba809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721644213921900076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2bc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff33042-273b-43a2-b72e-7c8a8e6df754,},Annotations:map[string]string{io.kubernetes.container.hash: aca09f34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca87cc163cf9b4e4096bb74373a2a1ae9bbed994f549306389e78c8c94ab7f06,PodSandboxId:91c461356ca15c30aa43669c0cff
4ea42d2937d49be2c27be9cd808b5dc09baf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721644193426779498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4905a3c9eb7bc4f54b167e5a235e510,},Annotations:map[string]string{io.kubernetes.container.hash: b2869ad3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8095e98c3220c8c62fe67b807e326eb446a2367817da1ba2e256d88b98cfc382,PodSandboxId:4c5d47da23c1cd808a31a160a186f4058bf06b6949b4fe2e592f963e23e6192a,Metadata:&
ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721644193451565864,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e369930aaf656658263c0657bf4d260,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbab7bd85e1c45b81b0efd7edebef5063711bcc40003e385cfb6f934ca225da,PodSandboxId:a4b8b78d8e3a48d821ccd264f7bc3347e499661946cae127409dcc381a2c8637,Metadata:&ContainerMetadata
{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721644193457041806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2b929fed983322acba41469dd7b540,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9243bc8ee19c7a6e52cac3020b654481b13ba5e02863bda9c7eb8f933bd3fa7e,PodSandboxId:f5aa39a9d64e91d19e7598d6ecb2cccbe5b5acf5480d595372a5fc59ea209250,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721644193399117470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ea7606da16847bd79a635784b5bb097,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae5f20f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b05a7bb-cd30-4e14-8814-0dadbec33bf7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.286988253Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=65edc06f-6a98-4cab-97b8-84d24a585a1b name=/runtime.v1.RuntimeService/Version
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.287073112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=65edc06f-6a98-4cab-97b8-84d24a585a1b name=/runtime.v1.RuntimeService/Version
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.288165422Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8793dc7e-bee1-4af8-a74b-105c2d552a37 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.289588131Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721644650289474875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580633,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8793dc7e-bee1-4af8-a74b-105c2d552a37 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.290148133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0c27b39-311a-4cce-98ec-019e51941be3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.290218052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0c27b39-311a-4cce-98ec-019e51941be3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:37:30 addons-362127 crio[685]: time="2024-07-22 10:37:30.290604266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:800f46b915ccc989770628b7ef110a36e03a045d523dad4fb4b6b43da4e30d08,PodSandboxId:a4761b99cae6cefcf03c97a1aa28ec40d1095292fd0463fa10d356e56d3b3983,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721644465474532720,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-lj5kn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43b1d5c8-b098-4afc-b72d-a5e7c55e8230,},Annotations:map[string]string{io.kubernetes.container.hash: 80638108,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73770316dba17d1304dae312faad6d9decde06687f804b62f2b480a60173a1f4,PodSandboxId:32307f7379d78fe5cf030b62b505697b66a8aae49e78bfe4d9e87f912d5e97cc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721644328376481564,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b92b935b-6089-4609-bf2a-f636364a6400,},Annotations:map[string]string{io.kubernet
es.container.hash: 7365ae81,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fcc9665ac1478218288ecc39835278145d4508db1fed0c9cb762c6c8743d35e,PodSandboxId:ec78a497784b7b5c2bb3a7b215d06bb6694aa37311d0dcd2a81d24b05cfcf74c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721644302838909713,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-25xv5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: a1da9ddd-aa30-431f-8b6d-4f19b1f7d384,},Annotations:map[string]string{io.kubernetes.container.hash: 86c15fa3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d97614ce321bdd3c0a5f2626302115312b0d38930a1ca04d667bd107517db29,PodSandboxId:d7d452b7fdf1ade8b12c048b644023212faf4a2227ff9f507a81f80ec63f96ca,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721644295978944281,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5s6sz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 252ee88a-9e97-4f05-888e-6ffd4a637403,},Annotations:map[string]string{io.kubernetes.container.hash: 43bfb3ae,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:894be7fb7c0a807b7c662295b05cec0f6d95d5d1bc597309d4250ed48d7d09de,PodSandboxId:efc9f77cdcb60ea0f17a9bef4832e0965b3f2345e28c7006b3519170f9e6787b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721644
262442510447,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-6h47n,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 75bce171-cade-4a90-afba-510f2e9fb3ce,},Annotations:map[string]string{io.kubernetes.container.hash: 53151e59,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0,PodSandboxId:854ef3e35e4e76b896ebd8a6beb512a4cd95de01c58fdd6e561708e8d6d29582,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721644257260184770,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-c7dpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d0a2a6c-b7cf-488c-97d6-3fb459a706c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12ef7c28,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a023d343932260c127b3307e1f989c331d5447f05f601090ab7113a5cb23a336,PodSandboxId:6ba14bf1d6d237986b92ccf1497b6991fbc64a0651e36f09e826149911e3d28c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721644220094314339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca3da52f-e625-4fbf-8bf7-39f0bd596c5c,},Annotations:map[string]string{io.kubernetes.container.hash: 56a3380a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3009a540031d07af2dafd5a461ece3fc0a592c78dffca549823a7a45d64884c6,PodSandboxId:8b050e4257f8ca8fb66e7b9aaa2e1ce7c7945c7ccbb8c11fabfd0a443b087499,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721644215090459391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rdwgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f869a5-d53d-4fc2-94d5-cab1e86811b8,},Annotations:map[string]string{io.kubernetes.container.hash: 890fcdbc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1038681b91ded1a2d4021ea99fa938b6f32fedd35dbd401e2bd11648def7d0d4,PodSandb
oxId:027d60d52d84ed76e305f40fa04758485e5c25626399e3ec0c93c17ed58ba809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721644213921900076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2bc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff33042-273b-43a2-b72e-7c8a8e6df754,},Annotations:map[string]string{io.kubernetes.container.hash: aca09f34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca87cc163cf9b4e4096bb74373a2a1ae9bbed994f549306389e78c8c94ab7f06,PodSandboxId:91c461356ca15c30aa43669c0cff
4ea42d2937d49be2c27be9cd808b5dc09baf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721644193426779498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4905a3c9eb7bc4f54b167e5a235e510,},Annotations:map[string]string{io.kubernetes.container.hash: b2869ad3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8095e98c3220c8c62fe67b807e326eb446a2367817da1ba2e256d88b98cfc382,PodSandboxId:4c5d47da23c1cd808a31a160a186f4058bf06b6949b4fe2e592f963e23e6192a,Metadata:&
ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721644193451565864,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e369930aaf656658263c0657bf4d260,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbab7bd85e1c45b81b0efd7edebef5063711bcc40003e385cfb6f934ca225da,PodSandboxId:a4b8b78d8e3a48d821ccd264f7bc3347e499661946cae127409dcc381a2c8637,Metadata:&ContainerMetadata
{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721644193457041806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2b929fed983322acba41469dd7b540,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9243bc8ee19c7a6e52cac3020b654481b13ba5e02863bda9c7eb8f933bd3fa7e,PodSandboxId:f5aa39a9d64e91d19e7598d6ecb2cccbe5b5acf5480d595372a5fc59ea209250,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721644193399117470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-362127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ea7606da16847bd79a635784b5bb097,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae5f20f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0c27b39-311a-4cce-98ec-019e51941be3 name=/runtime.v1.RuntimeService/ListContainers
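
The CRI-O entries above are the runtime's debug responses to periodic ListContainers/Version polling over the CRI API, not errors. As a rough sketch (assuming the addons-362127 profile is still running and the VM uses the usual systemd crio unit), the same daemon log can be read directly from the node:

    minikube -p addons-362127 ssh -- sudo journalctl -u crio --no-pager -n 100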
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	800f46b915ccc       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   a4761b99cae6c       hello-world-app-6778b5fc9f-lj5kn
	73770316dba17       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         5 minutes ago       Running             nginx                     0                   32307f7379d78       nginx
	0fcc9665ac147       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   5 minutes ago       Running             headlamp                  0                   ec78a497784b7       headlamp-7867546754-25xv5
	8d97614ce321b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            5 minutes ago       Running             gcp-auth                  0                   d7d452b7fdf1a       gcp-auth-5db96cd9b4-5s6sz
	894be7fb7c0a8       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                         6 minutes ago       Running             yakd                      0                   efc9f77cdcb60       yakd-dashboard-799879c74f-6h47n
	af665d7c09f29       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   854ef3e35e4e7       metrics-server-c59844bb4-c7dpf
	a023d34393226       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   6ba14bf1d6d23       storage-provisioner
	3009a540031d0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   8b050e4257f8c       coredns-7db6d8ff4d-rdwgl
	1038681b91ded       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        7 minutes ago       Running             kube-proxy                0                   027d60d52d84e       kube-proxy-w2bc4
	1cbab7bd85e1c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        7 minutes ago       Running             kube-controller-manager   0                   a4b8b78d8e3a4       kube-controller-manager-addons-362127
	8095e98c3220c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        7 minutes ago       Running             kube-scheduler            0                   4c5d47da23c1c       kube-scheduler-addons-362127
	ca87cc163cf9b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        7 minutes ago       Running             etcd                      0                   91c461356ca15       etcd-addons-362127
	9243bc8ee19c7       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        7 minutes ago       Running             kube-apiserver            0                   f5aa39a9d64e9       kube-apiserver-addons-362127
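
The table above is the node-level container listing gathered by the log collector. A comparable listing can be pulled straight from the VM with crictl (sketch only, assuming the profile is still up):

    minikube -p addons-362127 ssh -- sudo crictl ps -a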
	
	
	==> coredns [3009a540031d07af2dafd5a461ece3fc0a592c78dffca549823a7a45d64884c6] <==
	[INFO] 10.244.0.6:52006 - 20009 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000119998s
	[INFO] 10.244.0.6:39421 - 60181 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000163984s
	[INFO] 10.244.0.6:39421 - 14870 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000192082s
	[INFO] 10.244.0.6:48467 - 53622 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063444s
	[INFO] 10.244.0.6:48467 - 63600 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063352s
	[INFO] 10.244.0.6:36840 - 55911 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000084433s
	[INFO] 10.244.0.6:36840 - 46693 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000058623s
	[INFO] 10.244.0.6:35631 - 38791 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000127701s
	[INFO] 10.244.0.6:35631 - 26747 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000047442s
	[INFO] 10.244.0.6:35951 - 4450 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051749s
	[INFO] 10.244.0.6:35951 - 15200 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000023281s
	[INFO] 10.244.0.6:55827 - 30363 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048992s
	[INFO] 10.244.0.6:55827 - 31877 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039092s
	[INFO] 10.244.0.6:35091 - 42481 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000039164s
	[INFO] 10.244.0.6:35091 - 23283 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00003734s
	[INFO] 10.244.0.22:36695 - 44770 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000237854s
	[INFO] 10.244.0.22:51732 - 13928 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000916257s
	[INFO] 10.244.0.22:53833 - 38195 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115203s
	[INFO] 10.244.0.22:36418 - 7820 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000075271s
	[INFO] 10.244.0.22:40258 - 49446 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126748s
	[INFO] 10.244.0.22:47003 - 64351 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000078692s
	[INFO] 10.244.0.22:49460 - 40159 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000725386s
	[INFO] 10.244.0.22:53756 - 4990 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000411581s
	[INFO] 10.244.0.26:42406 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000345393s
	[INFO] 10.244.0.26:43937 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000178061s
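
The NXDOMAIN lines above show the pod resolver walking the cluster search suffixes (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) before the fully qualified name answers NOERROR; with the default ndots:5 pod DNS settings this expansion is expected rather than a failure. A lookup can be reproduced from inside the cluster with a throwaway pod (sketch; the pod name and busybox tag are illustrative):

    kubectl --context addons-362127 run dns-probe --rm -it --restart=Never \
      --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local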
	
	
	==> describe nodes <==
	Name:               addons-362127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-362127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=addons-362127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T10_29_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-362127
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:29:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-362127
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 10:37:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:34:35 +0000   Mon, 22 Jul 2024 10:29:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:34:35 +0000   Mon, 22 Jul 2024 10:29:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:34:35 +0000   Mon, 22 Jul 2024 10:29:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:34:35 +0000   Mon, 22 Jul 2024 10:30:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    addons-362127
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 4cde07d4e07b438db452d7848feab09e
	  System UUID:                4cde07d4-e07b-438d-b452-d7848feab09e
	  Boot ID:                    1a54dee2-ee71-4081-88cc-549dd9770d8d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-lj5kn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  gcp-auth                    gcp-auth-5db96cd9b4-5s6sz                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  headlamp                    headlamp-7867546754-25xv5                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 coredns-7db6d8ff4d-rdwgl                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m18s
	  kube-system                 etcd-addons-362127                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m32s
	  kube-system                 kube-apiserver-addons-362127             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-controller-manager-addons-362127    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-proxy-w2bc4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 kube-scheduler-addons-362127             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 metrics-server-c59844bb4-c7dpf           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m12s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  yakd-dashboard              yakd-dashboard-799879c74f-6h47n          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     7m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m15s  kube-proxy       
	  Normal  Starting                 7m31s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m31s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m31s  kubelet          Node addons-362127 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m31s  kubelet          Node addons-362127 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m31s  kubelet          Node addons-362127 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m30s  kubelet          Node addons-362127 status is now: NodeReady
	  Normal  RegisteredNode           7m19s  node-controller  Node addons-362127 event: Registered Node addons-362127 in Controller
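
The node description above is standard kubectl output for the single control-plane node; assuming the cluster from this run is still reachable, it can be regenerated with:

    kubectl --context addons-362127 describe node addons-362127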
	
	
	==> dmesg <==
	[  +0.063696] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.481334] systemd-fstab-generator[1272]: Ignoring "noauto" option for root device
	[  +0.088524] kauditd_printk_skb: 69 callbacks suppressed
	[Jul22 10:30] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.572833] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
	[  +4.895309] kauditd_printk_skb: 112 callbacks suppressed
	[  +5.003548] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.977644] kauditd_printk_skb: 100 callbacks suppressed
	[ +25.678501] kauditd_printk_skb: 30 callbacks suppressed
	[Jul22 10:31] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.075909] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.240652] kauditd_printk_skb: 75 callbacks suppressed
	[  +6.322502] kauditd_printk_skb: 34 callbacks suppressed
	[ +14.987538] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.639785] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.002030] kauditd_printk_skb: 66 callbacks suppressed
	[  +5.078084] kauditd_printk_skb: 60 callbacks suppressed
	[  +6.501698] kauditd_printk_skb: 33 callbacks suppressed
	[Jul22 10:32] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.442949] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.531902] kauditd_printk_skb: 3 callbacks suppressed
	[ +20.944679] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.263155] kauditd_printk_skb: 33 callbacks suppressed
	[Jul22 10:34] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.348455] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [ca87cc163cf9b4e4096bb74373a2a1ae9bbed994f549306389e78c8c94ab7f06] <==
	{"level":"info","ts":"2024-07-22T10:31:11.119593Z","caller":"traceutil/trace.go:171","msg":"trace[784481501] transaction","detail":"{read_only:false; response_revision:1026; number_of_response:1; }","duration":"275.805133ms","start":"2024-07-22T10:31:10.84378Z","end":"2024-07-22T10:31:11.119585Z","steps":["trace[784481501] 'process raft request'  (duration: 275.311707ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:31:33.905938Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.45914ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8826370110652983807 > lease_revoke:<id:7a7d90d9fd98a95a>","response":"size:28"}
	{"level":"info","ts":"2024-07-22T10:31:33.906085Z","caller":"traceutil/trace.go:171","msg":"trace[1605961939] linearizableReadLoop","detail":"{readStateIndex:1170; appliedIndex:1169; }","duration":"379.98362ms","start":"2024-07-22T10:31:33.526089Z","end":"2024-07-22T10:31:33.906073Z","steps":["trace[1605961939] 'read index received'  (duration: 119.049694ms)","trace[1605961939] 'applied index is now lower than readState.Index'  (duration: 260.932927ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T10:31:33.906425Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"380.309456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-07-22T10:31:33.906491Z","caller":"traceutil/trace.go:171","msg":"trace[1542156137] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1135; }","duration":"380.414363ms","start":"2024-07-22T10:31:33.526066Z","end":"2024-07-22T10:31:33.90648Z","steps":["trace[1542156137] 'agreement among raft nodes before linearized reading'  (duration: 380.140913ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:31:33.906545Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:31:33.526053Z","time spent":"380.478449ms","remote":"127.0.0.1:59168","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11476,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-22T10:31:33.906637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.961107ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"warn","ts":"2024-07-22T10:31:33.906499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"353.381021ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-22T10:31:33.906802Z","caller":"traceutil/trace.go:171","msg":"trace[490097682] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; response_count:0; response_revision:1135; }","duration":"353.776233ms","start":"2024-07-22T10:31:33.553016Z","end":"2024-07-22T10:31:33.906792Z","steps":["trace[490097682] 'agreement among raft nodes before linearized reading'  (duration: 353.378648ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T10:31:33.906936Z","caller":"traceutil/trace.go:171","msg":"trace[1879019766] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1135; }","duration":"178.088655ms","start":"2024-07-22T10:31:33.72865Z","end":"2024-07-22T10:31:33.906739Z","steps":["trace[1879019766] 'agreement among raft nodes before linearized reading'  (duration: 177.870288ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:31:33.906919Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:31:33.553004Z","time spent":"353.901699ms","remote":"127.0.0.1:59522","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":7,"response size":30,"request content":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true "}
	{"level":"info","ts":"2024-07-22T10:31:45.157704Z","caller":"traceutil/trace.go:171","msg":"trace[675707751] linearizableReadLoop","detail":"{readStateIndex:1299; appliedIndex:1298; }","duration":"441.621887ms","start":"2024-07-22T10:31:44.716063Z","end":"2024-07-22T10:31:45.157685Z","steps":["trace[675707751] 'read index received'  (duration: 441.442929ms)","trace[675707751] 'applied index is now lower than readState.Index'  (duration: 178.404µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-22T10:31:45.157835Z","caller":"traceutil/trace.go:171","msg":"trace[1180722934] transaction","detail":"{read_only:false; response_revision:1261; number_of_response:1; }","duration":"522.08685ms","start":"2024-07-22T10:31:44.635741Z","end":"2024-07-22T10:31:45.157827Z","steps":["trace[1180722934] 'process raft request'  (duration: 521.77533ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:31:45.157942Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:31:44.635728Z","time spent":"522.128852ms","remote":"127.0.0.1:59168","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4319,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-2k5sr\" mod_revision:1250 > success:<request_put:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-2k5sr\" value_size:4248 >> failure:<request_range:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-2k5sr\" > >"}
	{"level":"warn","ts":"2024-07-22T10:31:45.15797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.339722ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-07-22T10:31:45.158019Z","caller":"traceutil/trace.go:171","msg":"trace[1916217483] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1261; }","duration":"202.414729ms","start":"2024-07-22T10:31:44.955596Z","end":"2024-07-22T10:31:45.15801Z","steps":["trace[1916217483] 'agreement among raft nodes before linearized reading'  (duration: 202.296487ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:31:45.158147Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"442.083043ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-bc269bbf-3c8b-4d86-a8aa-8acec54e004a\" ","response":"range_response_count:1 size:4206"}
	{"level":"info","ts":"2024-07-22T10:31:45.158163Z","caller":"traceutil/trace.go:171","msg":"trace[1176231384] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-bc269bbf-3c8b-4d86-a8aa-8acec54e004a; range_end:; response_count:1; response_revision:1261; }","duration":"442.117869ms","start":"2024-07-22T10:31:44.716039Z","end":"2024-07-22T10:31:45.158157Z","steps":["trace[1176231384] 'agreement among raft nodes before linearized reading'  (duration: 442.066797ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:31:45.158183Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:31:44.716027Z","time spent":"442.150547ms","remote":"127.0.0.1:59168","response type":"/etcdserverpb.KV/Range","request count":0,"request size":94,"response count":1,"response size":4229,"request content":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-bc269bbf-3c8b-4d86-a8aa-8acec54e004a\" "}
	{"level":"info","ts":"2024-07-22T10:32:20.387611Z","caller":"traceutil/trace.go:171","msg":"trace[1904002428] transaction","detail":"{read_only:false; response_revision:1529; number_of_response:1; }","duration":"144.736081ms","start":"2024-07-22T10:32:20.242852Z","end":"2024-07-22T10:32:20.387588Z","steps":["trace[1904002428] 'process raft request'  (duration: 144.644087ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T10:32:21.069695Z","caller":"traceutil/trace.go:171","msg":"trace[1024143635] transaction","detail":"{read_only:false; response_revision:1531; number_of_response:1; }","duration":"314.184445ms","start":"2024-07-22T10:32:20.755493Z","end":"2024-07-22T10:32:21.069678Z","steps":["trace[1024143635] 'process raft request'  (duration: 314.091831ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:32:21.069865Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:32:20.755469Z","time spent":"314.285965ms","remote":"127.0.0.1:59262","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":486,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1507 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:427 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"info","ts":"2024-07-22T10:32:21.070207Z","caller":"traceutil/trace.go:171","msg":"trace[222923741] linearizableReadLoop","detail":"{readStateIndex:1581; appliedIndex:1581; }","duration":"236.359642ms","start":"2024-07-22T10:32:20.833837Z","end":"2024-07-22T10:32:21.070196Z","steps":["trace[222923741] 'read index received'  (duration: 235.670839ms)","trace[222923741] 'applied index is now lower than readState.Index'  (duration: 686.847µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T10:32:21.070373Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.527529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6032"}
	{"level":"info","ts":"2024-07-22T10:32:21.070411Z","caller":"traceutil/trace.go:171","msg":"trace[2121093447] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1531; }","duration":"236.592943ms","start":"2024-07-22T10:32:20.833811Z","end":"2024-07-22T10:32:21.070404Z","steps":["trace[2121093447] 'agreement among raft nodes before linearized reading'  (duration: 236.440252ms)"],"step_count":1}
	
	
	==> gcp-auth [8d97614ce321bdd3c0a5f2626302115312b0d38930a1ca04d667bd107517db29] <==
	2024/07/22 10:31:36 GCP Auth Webhook started!
	2024/07/22 10:31:37 Ready to marshal response ...
	2024/07/22 10:31:37 Ready to write response ...
	2024/07/22 10:31:37 Ready to marshal response ...
	2024/07/22 10:31:37 Ready to write response ...
	2024/07/22 10:31:37 Ready to marshal response ...
	2024/07/22 10:31:37 Ready to write response ...
	2024/07/22 10:31:41 Ready to marshal response ...
	2024/07/22 10:31:41 Ready to write response ...
	2024/07/22 10:31:43 Ready to marshal response ...
	2024/07/22 10:31:43 Ready to write response ...
	2024/07/22 10:31:43 Ready to marshal response ...
	2024/07/22 10:31:43 Ready to write response ...
	2024/07/22 10:31:47 Ready to marshal response ...
	2024/07/22 10:31:47 Ready to write response ...
	2024/07/22 10:31:53 Ready to marshal response ...
	2024/07/22 10:31:53 Ready to write response ...
	2024/07/22 10:32:05 Ready to marshal response ...
	2024/07/22 10:32:05 Ready to write response ...
	2024/07/22 10:32:15 Ready to marshal response ...
	2024/07/22 10:32:15 Ready to write response ...
	2024/07/22 10:32:43 Ready to marshal response ...
	2024/07/22 10:32:43 Ready to write response ...
	2024/07/22 10:34:24 Ready to marshal response ...
	2024/07/22 10:34:24 Ready to write response ...
	
	
	==> kernel <==
	 10:37:30 up 8 min,  0 users,  load average: 0.22, 0.83, 0.62
	Linux addons-362127 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9243bc8ee19c7a6e52cac3020b654481b13ba5e02863bda9c7eb8f933bd3fa7e] <==
	E0722 10:31:59.290880       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0722 10:31:59.291733       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.29.42:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.29.42:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.29.42:443: connect: connection refused
	E0722 10:31:59.305921       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.29.42:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.29.42:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.29.42:443: connect: connection refused
	E0722 10:31:59.313270       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.29.42:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.29.42:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.29.42:443: connect: connection refused
	E0722 10:31:59.334276       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.29.42:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.29.42:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.29.42:443: connect: connection refused
	I0722 10:31:59.572361       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0722 10:31:59.851941       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0722 10:32:00.881473       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0722 10:32:05.379779       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0722 10:32:05.548020       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.23.84"}
	E0722 10:32:09.204952       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0722 10:32:28.896912       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0722 10:32:59.091404       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 10:32:59.091475       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0722 10:32:59.121087       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 10:32:59.121133       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0722 10:32:59.144661       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 10:32:59.144714       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0722 10:32:59.187436       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0722 10:32:59.187478       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0722 10:33:00.125627       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0722 10:33:00.188407       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0722 10:33:00.214289       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0722 10:34:24.407620       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.218.100"}
	
	
	==> kube-controller-manager [1cbab7bd85e1c45b81b0efd7edebef5063711bcc40003e385cfb6f934ca225da] <==
	W0722 10:35:09.922035       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:35:09.922146       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:35:17.353045       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:35:17.353166       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:35:35.931872       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:35:35.932022       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:35:48.527189       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:35:48.527248       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:35:59.018927       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:35:59.019077       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:36:02.531626       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:36:02.531744       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:36:20.929588       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:36:20.929783       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:36:27.859154       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:36:27.859289       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:36:32.430852       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:36:32.430940       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:36:59.698235       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:36:59.698286       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:37:03.165465       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:37:03.165619       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0722 10:37:14.590096       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0722 10:37:14.590193       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0722 10:37:29.268022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="32.382µs"
	
	
	==> kube-proxy [1038681b91ded1a2d4021ea99fa938b6f32fedd35dbd401e2bd11648def7d0d4] <==
	I0722 10:30:14.567233       1 server_linux.go:69] "Using iptables proxy"
	I0722 10:30:14.580957       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.147"]
	I0722 10:30:14.768579       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 10:30:14.768620       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 10:30:14.768634       1 server_linux.go:165] "Using iptables Proxier"
	I0722 10:30:14.771299       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 10:30:14.771535       1 server.go:872] "Version info" version="v1.30.3"
	I0722 10:30:14.771547       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:30:14.772861       1 config.go:192] "Starting service config controller"
	I0722 10:30:14.772874       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 10:30:14.772907       1 config.go:101] "Starting endpoint slice config controller"
	I0722 10:30:14.772910       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 10:30:14.781038       1 config.go:319] "Starting node config controller"
	I0722 10:30:14.781048       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 10:30:14.873539       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 10:30:14.874131       1 shared_informer.go:320] Caches are synced for service config
	I0722 10:30:14.881410       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8095e98c3220c8c62fe67b807e326eb446a2367817da1ba2e256d88b98cfc382] <==
	E0722 10:29:56.356792       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 10:29:56.356775       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0722 10:29:56.356892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 10:29:56.356939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 10:29:56.356950       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 10:29:56.356957       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 10:29:56.356288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 10:29:56.356998       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 10:29:56.357040       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 10:29:56.357074       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 10:29:57.159886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 10:29:57.159993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 10:29:57.188163       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 10:29:57.188244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 10:29:57.220712       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 10:29:57.220739       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 10:29:57.246454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 10:29:57.246698       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 10:29:57.496497       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 10:29:57.496589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0722 10:29:57.517799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 10:29:57.517945       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 10:29:57.567863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 10:29:57.568003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0722 10:29:59.547802       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 10:34:59 addons-362127 kubelet[1279]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:34:59 addons-362127 kubelet[1279]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:34:59 addons-362127 kubelet[1279]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:34:59 addons-362127 kubelet[1279]: I0722 10:34:59.975481    1279 scope.go:117] "RemoveContainer" containerID="5f0a763da12bc2c9c61d9a2837239d3d0895b5c4f90ada6ff6fc48fde05ec432"
	Jul 22 10:34:59 addons-362127 kubelet[1279]: I0722 10:34:59.996387    1279 scope.go:117] "RemoveContainer" containerID="474bd9c4d656b8bd2259842e860e8e0c1f8c33d92d2623ea8e4a13ef1e494066"
	Jul 22 10:35:59 addons-362127 kubelet[1279]: E0722 10:35:59.071250    1279 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:35:59 addons-362127 kubelet[1279]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:35:59 addons-362127 kubelet[1279]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:35:59 addons-362127 kubelet[1279]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:35:59 addons-362127 kubelet[1279]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:36:59 addons-362127 kubelet[1279]: E0722 10:36:59.070464    1279 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:36:59 addons-362127 kubelet[1279]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:36:59 addons-362127 kubelet[1279]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:36:59 addons-362127 kubelet[1279]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:36:59 addons-362127 kubelet[1279]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:37:30 addons-362127 kubelet[1279]: I0722 10:37:30.726279    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7d0a2a6c-b7cf-488c-97d6-3fb459a706c9-tmp-dir\") pod \"7d0a2a6c-b7cf-488c-97d6-3fb459a706c9\" (UID: \"7d0a2a6c-b7cf-488c-97d6-3fb459a706c9\") "
	Jul 22 10:37:30 addons-362127 kubelet[1279]: I0722 10:37:30.726382    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnwhn\" (UniqueName: \"kubernetes.io/projected/7d0a2a6c-b7cf-488c-97d6-3fb459a706c9-kube-api-access-qnwhn\") pod \"7d0a2a6c-b7cf-488c-97d6-3fb459a706c9\" (UID: \"7d0a2a6c-b7cf-488c-97d6-3fb459a706c9\") "
	Jul 22 10:37:30 addons-362127 kubelet[1279]: I0722 10:37:30.726945    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d0a2a6c-b7cf-488c-97d6-3fb459a706c9-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "7d0a2a6c-b7cf-488c-97d6-3fb459a706c9" (UID: "7d0a2a6c-b7cf-488c-97d6-3fb459a706c9"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 22 10:37:30 addons-362127 kubelet[1279]: I0722 10:37:30.729978    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d0a2a6c-b7cf-488c-97d6-3fb459a706c9-kube-api-access-qnwhn" (OuterVolumeSpecName: "kube-api-access-qnwhn") pod "7d0a2a6c-b7cf-488c-97d6-3fb459a706c9" (UID: "7d0a2a6c-b7cf-488c-97d6-3fb459a706c9"). InnerVolumeSpecName "kube-api-access-qnwhn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 22 10:37:30 addons-362127 kubelet[1279]: I0722 10:37:30.827296    1279 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7d0a2a6c-b7cf-488c-97d6-3fb459a706c9-tmp-dir\") on node \"addons-362127\" DevicePath \"\""
	Jul 22 10:37:30 addons-362127 kubelet[1279]: I0722 10:37:30.827468    1279 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qnwhn\" (UniqueName: \"kubernetes.io/projected/7d0a2a6c-b7cf-488c-97d6-3fb459a706c9-kube-api-access-qnwhn\") on node \"addons-362127\" DevicePath \"\""
	Jul 22 10:37:30 addons-362127 kubelet[1279]: I0722 10:37:30.883222    1279 scope.go:117] "RemoveContainer" containerID="af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0"
	Jul 22 10:37:30 addons-362127 kubelet[1279]: I0722 10:37:30.918732    1279 scope.go:117] "RemoveContainer" containerID="af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0"
	Jul 22 10:37:30 addons-362127 kubelet[1279]: E0722 10:37:30.919544    1279 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0\": container with ID starting with af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0 not found: ID does not exist" containerID="af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0"
	Jul 22 10:37:30 addons-362127 kubelet[1279]: I0722 10:37:30.919675    1279 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0"} err="failed to get container status \"af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0\": rpc error: code = NotFound desc = could not find container \"af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0\": container with ID starting with af665d7c09f29bedd86fd338107161609babe3d49d97763449d7a6debd119bd0 not found: ID does not exist"
	
	
	==> storage-provisioner [a023d343932260c127b3307e1f989c331d5447f05f601090ab7113a5cb23a336] <==
	I0722 10:30:20.431247       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 10:30:20.504713       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 10:30:20.504770       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 10:30:20.533459       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 10:30:20.533663       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-362127_2d69a6bc-f8dc-402f-8c5b-e2205587b1d2!
	I0722 10:30:20.538410       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af7272b2-74b5-4117-9eb8-d62733289c47", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-362127_2d69a6bc-f8dc-402f-8c5b-e2205587b1d2 became leader
	I0722 10:30:20.642795       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-362127_2d69a6bc-f8dc-402f-8c5b-e2205587b1d2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-362127 -n addons-362127
helpers_test.go:261: (dbg) Run:  kubectl --context addons-362127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (334.90s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.19s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-362127
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-362127: exit status 82 (2m0.443506398s)

                                                
                                                
-- stdout --
	* Stopping node "addons-362127"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-362127" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-362127
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-362127: exit status 11 (21.46263683s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-362127" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-362127
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-362127: exit status 11 (6.143810474s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-362127" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-362127
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-362127: exit status 11 (6.142546043s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-362127" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.19s)
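For reference, the command sequence this test exercises reduces to the shell sketch below, mirroring the (dbg) Run lines above (profile name taken from this log):

# Stop the addons cluster, then toggle addons against the (supposedly) stopped profile.
out/minikube-linux-amd64 stop -p addons-362127
out/minikube-linux-amd64 addons enable dashboard -p addons-362127
out/minikube-linux-amd64 addons disable dashboard -p addons-362127
out/minikube-linux-amd64 addons disable gvisor -p addons-362127

In this run the stop command hit GUEST_STOP_TIMEOUT with the VM still reported as "Running", and each subsequent addons call then exited with status 11 after failing to reach 192.168.39.147:22 (no route to host).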

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (187.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [75a4e563-bce0-47e4-915c-81066308b3a6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005236712s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-941610 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-941610 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-941610 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-941610 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c847ae5f-f47f-403e-9955-cbebede22ae4] Pending
helpers_test.go:344: "sp-pod" [c847ae5f-f47f-403e-9955-cbebede22ae4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-941610 -n functional-941610
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-07-22 10:46:34.917428939 +0000 UTC m=+1065.644843282
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-941610 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-941610 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-941610/192.168.39.245
Start Time:       Mon, 22 Jul 2024 10:43:34 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ph9h9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-ph9h9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3m    default-scheduler  Successfully assigned default/sp-pod to functional-941610
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-941610 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-941610 logs sp-pod -n default: exit status 1 (64.053671ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-941610 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
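For reference, the PVC/pod pair the test applies can be approximated with the sketch below; the object names, image, label, and mount path are taken from the describe output above, while the access mode and storage request are illustrative assumptions (the actual testdata/storage-provisioner manifests may differ in detail):

kubectl --context functional-941610 apply -f - <<'EOF'
# PersistentVolumeClaim the pod mounts (size and access mode are assumed).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
# Pod the test waits on via the test=storage-provisioner label.
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: docker.io/nginx
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF

In this run the pod never left ContainerCreating within the 3m0s wait, which is why the kubectl logs call above returned only the BadRequest error.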
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-941610 -n functional-941610
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 logs -n 25
E0722 10:46:36.611450   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-941610 logs -n 25: (1.426070726s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service        | functional-941610 service                                             | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | --namespace=default --https                                           |                   |         |         |                     |                     |
	|                | --url hello-node                                                      |                   |         |         |                     |                     |
	| service        | functional-941610                                                     | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | service hello-node --url                                              |                   |         |         |                     |                     |
	|                | --format={{.IP}}                                                      |                   |         |         |                     |                     |
	| ssh            | functional-941610 ssh findmnt                                         | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | -T /mount-9p | grep 9p                                                |                   |         |         |                     |                     |
	| service        | functional-941610 service                                             | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | hello-node --url                                                      |                   |         |         |                     |                     |
	| ssh            | functional-941610 ssh -- ls                                           | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | -la /mount-9p                                                         |                   |         |         |                     |                     |
	| ssh            | functional-941610 ssh sudo cat                                        | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | /etc/test/nested/copy/13098/hosts                                     |                   |         |         |                     |                     |
	| ssh            | functional-941610 ssh sudo                                            | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC |                     |
	|                | umount -f /mount-9p                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-941610                                                  | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup993641779/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                   |         |         |                     |                     |
	| ssh            | functional-941610 ssh findmnt                                         | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC |                     |
	|                | -T /mount1                                                            |                   |         |         |                     |                     |
	| mount          | -p functional-941610                                                  | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup993641779/001:/mount2 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                   |         |         |                     |                     |
	| mount          | -p functional-941610                                                  | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup993641779/001:/mount1 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                   |         |         |                     |                     |
	| ssh            | functional-941610 ssh findmnt                                         | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | -T /mount1                                                            |                   |         |         |                     |                     |
	| ssh            | functional-941610 ssh findmnt                                         | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | -T /mount2                                                            |                   |         |         |                     |                     |
	| ssh            | functional-941610 ssh findmnt                                         | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | -T /mount3                                                            |                   |         |         |                     |                     |
	| mount          | -p functional-941610                                                  | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC |                     |
	|                | --kill=true                                                           |                   |         |         |                     |                     |
	| image          | functional-941610                                                     | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | image ls --format short                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| image          | functional-941610                                                     | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | image ls --format yaml                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| ssh            | functional-941610 ssh pgrep                                           | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC |                     |
	|                | buildkitd                                                             |                   |         |         |                     |                     |
	| image          | functional-941610 image build -t                                      | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | localhost/my-image:functional-941610                                  |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                      |                   |         |         |                     |                     |
	| image          | functional-941610                                                     | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | image ls --format json                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| image          | functional-941610                                                     | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | image ls --format table                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| update-context | functional-941610                                                     | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | update-context                                                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                   |         |         |                     |                     |
	| update-context | functional-941610                                                     | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | update-context                                                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                   |         |         |                     |                     |
	| update-context | functional-941610                                                     | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|                | update-context                                                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                   |         |         |                     |                     |
	| image          | functional-941610 image ls                                            | functional-941610 | jenkins | v1.33.1 | 22 Jul 24 10:43 UTC | 22 Jul 24 10:43 UTC |
	|----------------|-----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 10:43:41
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 10:43:41.199573   22085 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:43:41.199719   22085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:43:41.199729   22085 out.go:304] Setting ErrFile to fd 2...
	I0722 10:43:41.199735   22085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:43:41.200065   22085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:43:41.200727   22085 out.go:298] Setting JSON to false
	I0722 10:43:41.201862   22085 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1573,"bootTime":1721643448,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 10:43:41.201935   22085 start.go:139] virtualization: kvm guest
	I0722 10:43:41.204326   22085 out.go:177] * [functional-941610] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 10:43:41.206194   22085 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 10:43:41.206222   22085 notify.go:220] Checking for updates...
	I0722 10:43:41.208700   22085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 10:43:41.210000   22085 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:43:41.211365   22085 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:43:41.212603   22085 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 10:43:41.213853   22085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 10:43:41.215370   22085 config.go:182] Loaded profile config "functional-941610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:43:41.215928   22085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:43:41.215976   22085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:43:41.231322   22085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41649
	I0722 10:43:41.231616   22085 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:43:41.232099   22085 main.go:141] libmachine: Using API Version  1
	I0722 10:43:41.232120   22085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:43:41.232425   22085 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:43:41.232614   22085 main.go:141] libmachine: (functional-941610) Calling .DriverName
	I0722 10:43:41.232845   22085 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 10:43:41.233103   22085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:43:41.233142   22085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:43:41.247056   22085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34755
	I0722 10:43:41.247414   22085 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:43:41.247828   22085 main.go:141] libmachine: Using API Version  1
	I0722 10:43:41.247846   22085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:43:41.248116   22085 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:43:41.248296   22085 main.go:141] libmachine: (functional-941610) Calling .DriverName
	I0722 10:43:41.279017   22085 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 10:43:41.280135   22085 start.go:297] selected driver: kvm2
	I0722 10:43:41.280149   22085 start.go:901] validating driver "kvm2" against &{Name:functional-941610 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-941610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:43:41.280252   22085 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 10:43:41.282199   22085 out.go:177] 
	W0722 10:43:41.283390   22085 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0722 10:43:41.284526   22085 out.go:177] 
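	Note: the exit recorded at the end of this Last Start log is a memory validation failure, not a driver or VM problem: the requested allocation of 250MiB is below minikube's usable minimum of 1800MB, so the start aborts before any kvm2 work begins. A minimal sketch of the kind of invocation that reproduces this exit path; the flag values below are assumptions for illustration, since the exact arguments of this start are not shown in the log:
	
	# Requesting less than the 1800MB minimum makes minikube exit early with
	# RSRC_INSUFFICIENT_REQ_MEMORY (assumed reproduction, not the test's exact command):
	$ out/minikube-linux-amd64 start -p functional-941610 --memory=250mb --alsologtostderr -v=2
	
	# The same start with an allocation at or above the minimum proceeds to driver setup
	# (value is illustrative):
	$ out/minikube-linux-amd64 start -p functional-941610 --memory=2048mb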
	
	
	==> CRI-O <==
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.711673110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721645195711647288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:250735,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ddf5747-16c6-429e-bd06-7089d5245b46 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.712299069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15faa7a8-c1b9-4b2c-b008-a7cea65466ac name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.712361613Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15faa7a8-c1b9-4b2c-b008-a7cea65466ac name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.712725066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6f81cd463704768f62d10898115927772bbb577984a078db208a564b8917fff,PodSandboxId:68c58b8c8b4da28535a297899419a7795e24ffd477f80d638741c53ec0a8432e,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1721645042656287317,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-k7ctz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 191e5936-c1f0-47ef-963d-12a1c34225bb,},Annotations:map[string]string{io.kubernetes.container.hash: fa3b6bf8,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"c
ontainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeaa886ca694bf4f9d71ee972e5515ed6e4ab941e082a74dbf6683858a9d9b2b,PodSandboxId:22956d7af94913840f91b7db363d2cdc4bd5abc72f9623e1dad7bdd7ef066556,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1721645031118734915,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-kpj7q,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernet
es.pod.uid: 7cfe4ea9-c4f6-47f7-b54d-6077c5d4973b,},Annotations:map[string]string{io.kubernetes.container.hash: 8822e361,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:663b9a59358601206ca002cab1c16f667f7e8ed01a7cf5389b51031485ae176a,PodSandboxId:62e15200da9d2c26dd3298a277e67f84953f0e4783acc2e2aaff65f9535e8b45,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1721645029634860307,Labels:map[string]string{io.kubernetes.container.name: kubernet
es-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-f6mzb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 098625cb-2c7d-4837-a506-6f4a3b768007,},Annotations:map[string]string{io.kubernetes.container.hash: 515ecc21,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72a17afa0fff256f7794573fc2c37cf0a954100e8beed751aef8754e77d3a21,PodSandboxId:439a5cc95a6ae33c5466d917bba4f86593b117bacef94de64dc446167319b4b3,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c02
0289c,State:CONTAINER_EXITED,CreatedAt:1721645022152487229,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31b5c600-5f6b-4913-8638-d26b7e466b73,},Annotations:map[string]string{io.kubernetes.container.hash: d451fe43,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fc033dc6481fbd22e05931cea3b0cd557ff875e594746e403bf4ac1dace231a,PodSandboxId:7e631e6bdaa4f316ed13c617b1d24f2d504db972425991e0f62df21d27243986,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt
:1721645018002721728,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6d85cfcfd8-l4649,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 09010f1f-bbb2-43b5-8d3b-c4498b226b0b,},Annotations:map[string]string{io.kubernetes.container.hash: 65862f4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e5df3ef412a74616d36b1d3fd612f87ed53807603bc6bb838d2526cc9ade98,PodSandboxId:f580ac8001641eed2cc4f77928b15bff3173912efee6dfa7ba052858378b12c1,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,C
reatedAt:1721645013215951347,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-57b4589c47-5ggcb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ed1a2a0a-a1d2-45e3-9f35-66a17a9aef6f,},Annotations:map[string]string{io.kubernetes.container.hash: de29c480,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39862db06d4c602a34c54ca4ee5c53cc5e2366d6161185029c82d71be9500f9c,PodSandboxId:9089abe5a9a90f6a63799d359c08b9bdef0c3e2886bbf2bf51c93c0364580e77,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:172164498179
6025993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fngn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c09a06-0d2e-4cec-83e4-5eca270d8ccb,},Annotations:map[string]string{io.kubernetes.container.hash: 118ba7e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028c9a2ec438acde0e057a38569d26de16b3d96be57bf1fc886df70a2bab05bc,PodSandboxId:62a12d289d0ed407ba30b80dfab728ae1c7b94b4b0deba946d7db08d287d5214,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f70
9a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721644981802519349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75a4e563-bce0-47e4-915c-81066308b3a6,},Annotations:map[string]string{io.kubernetes.container.hash: d9415f97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc60b6f6c0096d0734d36f9af21886be13b60d431caa122cfd38a89ee5395ba,PodSandboxId:7584b24ea82ac3bc12f4257eb84f4417958bd23fd2f6a50f39b642b89957a052,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721644981785211152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xzdwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730292a9-e2c0-4ce3-85e2-71f5d862decb,},Annotations:map[string]string{io.kubernetes.container.hash: 4aaf551,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f650fc9686879493bbf66e41f18e8ade9d5161ff508cb534cd56ea4e5c3c77,PodSandboxId:aac5ec57ad943e99fa517a7016bdac1cdf7ed4083d8f84b23b513324d344d00d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721644978198770133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2698e7cd8e68c312ea4df21d35c4e1,},Annotations:map[string]string{io.kubernetes.container.hash: caefb72e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9e4b615825c45a8130568d02f6f1b0f25e60114c130461d53bc17c07e0214b,PodSandboxId:291340864e3273819496c9cb07db6897f6dfb2ee815b8bd229dcd1b3d37554d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721644978013322543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c17143d199da830b3325fd2184865,},Annotations:map[string]string{io.kubernetes.container.hash: 72e49f79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfc15f99a45b96dfd280ab81f89fd99817e330cb7343168981cb8c9749e8f91d,PodSandboxId:6de68d88f951574d68db5e08b826a1d6bfa3fa65ccb225df44fb6affcd10f12c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76
932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721644978000717499,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092815b69c8f520d1aff18dde384d22e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a044c8f12e050a2d045f22c804d1f67857cf1e3940378ece95f34dbabca4224,PodSandboxId:aa5ea59bc6ab86892b33e682605799fe30f8494c78f485a087ca1af5b719b645,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721644978004726158,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b30d5bc316a0745e75655ec7f1735df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b6c9b16bfecf02765f744c60fb831d75cc1ebe3fb827bd4f2e953d5d8bf68c,PodSandboxId:2d5288214adcb2fd134d3773a23a88ed1adcd966a0f2ca18ae294014894bc03c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08b
a382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721644948474657790,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fngn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c09a06-0d2e-4cec-83e4-5eca270d8ccb,},Annotations:map[string]string{io.kubernetes.container.hash: 118ba7e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62faf85b5f998c5ec5dff797db8fda2b56caaedf6b7d9f5c4f4f3dc81b34ea44,PodSandboxId:68948d206068fe96652798af21e4cbf4fb58b6e886980332abff333ff907e505,Metadata:&ContainerMetadata{Name:kube-proxy
,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721644948197516187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xzdwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730292a9-e2c0-4ce3-85e2-71f5d862decb,},Annotations:map[string]string{io.kubernetes.container.hash: 4aaf551,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f718ed4fbb292653a4589346f22aebce7d4a9bffaa93b143ffe1904e213a0864,PodSandboxId:d388f5b79c41308caed7656f9d5666d1ee360211efdad7cf0c793b4c0e02e512,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageS
pec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721644948163506258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75a4e563-bce0-47e4-915c-81066308b3a6,},Annotations:map[string]string{io.kubernetes.container.hash: d9415f97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9767b84391c282abd10df61bd7ff3820cfc654bb568203749493be062d26a1,PodSandboxId:ca2fd30eff2de958f6608fbbfe1570c5eeae681f23e3d517504d443075cee929,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image
:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721644944393299215,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092815b69c8f520d1aff18dde384d22e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4651856ff845f45600a572a5d6b1e55f58b602b19744f6003c9788e2c26818d,PodSandboxId:1b47859d43aae3ab0d01775251632cd5c9aa1f8226859dfd07fb0969e769486b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{I
mage:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721644944373310150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b30d5bc316a0745e75655ec7f1735df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50bd2981a3083088a47d287cddf9ee201d93d8856c9116589fadfb07599ce977,PodSandboxId:ddd65b3c96393029339c79da973437fbf6871b3ea336e96a9ef13c52483a6177,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f0
62788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721644944362176928,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c17143d199da830b3325fd2184865,},Annotations:map[string]string{io.kubernetes.container.hash: 72e49f79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15faa7a8-c1b9-4b2c-b008-a7cea65466ac name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.758603488Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3e1d0cb-c9d7-49bf-b22b-63386e44f865 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.758678584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3e1d0cb-c9d7-49bf-b22b-63386e44f865 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.760197704Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c72cf5cd-614c-4b8c-bde1-04c2990feb17 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.760950073Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721645195760864370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:250735,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c72cf5cd-614c-4b8c-bde1-04c2990feb17 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.761418443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94c7a1c2-5397-44e7-bd2c-f0e664e72401 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.761521894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94c7a1c2-5397-44e7-bd2c-f0e664e72401 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.761925658Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6f81cd463704768f62d10898115927772bbb577984a078db208a564b8917fff,PodSandboxId:68c58b8c8b4da28535a297899419a7795e24ffd477f80d638741c53ec0a8432e,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1721645042656287317,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-k7ctz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 191e5936-c1f0-47ef-963d-12a1c34225bb,},Annotations:map[string]string{io.kubernetes.container.hash: fa3b6bf8,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"c
ontainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeaa886ca694bf4f9d71ee972e5515ed6e4ab941e082a74dbf6683858a9d9b2b,PodSandboxId:22956d7af94913840f91b7db363d2cdc4bd5abc72f9623e1dad7bdd7ef066556,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1721645031118734915,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-kpj7q,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernet
es.pod.uid: 7cfe4ea9-c4f6-47f7-b54d-6077c5d4973b,},Annotations:map[string]string{io.kubernetes.container.hash: 8822e361,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:663b9a59358601206ca002cab1c16f667f7e8ed01a7cf5389b51031485ae176a,PodSandboxId:62e15200da9d2c26dd3298a277e67f84953f0e4783acc2e2aaff65f9535e8b45,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1721645029634860307,Labels:map[string]string{io.kubernetes.container.name: kubernet
es-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-f6mzb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 098625cb-2c7d-4837-a506-6f4a3b768007,},Annotations:map[string]string{io.kubernetes.container.hash: 515ecc21,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72a17afa0fff256f7794573fc2c37cf0a954100e8beed751aef8754e77d3a21,PodSandboxId:439a5cc95a6ae33c5466d917bba4f86593b117bacef94de64dc446167319b4b3,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c02
0289c,State:CONTAINER_EXITED,CreatedAt:1721645022152487229,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31b5c600-5f6b-4913-8638-d26b7e466b73,},Annotations:map[string]string{io.kubernetes.container.hash: d451fe43,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fc033dc6481fbd22e05931cea3b0cd557ff875e594746e403bf4ac1dace231a,PodSandboxId:7e631e6bdaa4f316ed13c617b1d24f2d504db972425991e0f62df21d27243986,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt
:1721645018002721728,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6d85cfcfd8-l4649,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 09010f1f-bbb2-43b5-8d3b-c4498b226b0b,},Annotations:map[string]string{io.kubernetes.container.hash: 65862f4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e5df3ef412a74616d36b1d3fd612f87ed53807603bc6bb838d2526cc9ade98,PodSandboxId:f580ac8001641eed2cc4f77928b15bff3173912efee6dfa7ba052858378b12c1,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,C
reatedAt:1721645013215951347,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-57b4589c47-5ggcb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ed1a2a0a-a1d2-45e3-9f35-66a17a9aef6f,},Annotations:map[string]string{io.kubernetes.container.hash: de29c480,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39862db06d4c602a34c54ca4ee5c53cc5e2366d6161185029c82d71be9500f9c,PodSandboxId:9089abe5a9a90f6a63799d359c08b9bdef0c3e2886bbf2bf51c93c0364580e77,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:172164498179
6025993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fngn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c09a06-0d2e-4cec-83e4-5eca270d8ccb,},Annotations:map[string]string{io.kubernetes.container.hash: 118ba7e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028c9a2ec438acde0e057a38569d26de16b3d96be57bf1fc886df70a2bab05bc,PodSandboxId:62a12d289d0ed407ba30b80dfab728ae1c7b94b4b0deba946d7db08d287d5214,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f70
9a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721644981802519349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75a4e563-bce0-47e4-915c-81066308b3a6,},Annotations:map[string]string{io.kubernetes.container.hash: d9415f97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc60b6f6c0096d0734d36f9af21886be13b60d431caa122cfd38a89ee5395ba,PodSandboxId:7584b24ea82ac3bc12f4257eb84f4417958bd23fd2f6a50f39b642b89957a052,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721644981785211152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xzdwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730292a9-e2c0-4ce3-85e2-71f5d862decb,},Annotations:map[string]string{io.kubernetes.container.hash: 4aaf551,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f650fc9686879493bbf66e41f18e8ade9d5161ff508cb534cd56ea4e5c3c77,PodSandboxId:aac5ec57ad943e99fa517a7016bdac1cdf7ed4083d8f84b23b513324d344d00d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721644978198770133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2698e7cd8e68c312ea4df21d35c4e1,},Annotations:map[string]string{io.kubernetes.container.hash: caefb72e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9e4b615825c45a8130568d02f6f1b0f25e60114c130461d53bc17c07e0214b,PodSandboxId:291340864e3273819496c9cb07db6897f6dfb2ee815b8bd229dcd1b3d37554d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721644978013322543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c17143d199da830b3325fd2184865,},Annotations:map[string]string{io.kubernetes.container.hash: 72e49f79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfc15f99a45b96dfd280ab81f89fd99817e330cb7343168981cb8c9749e8f91d,PodSandboxId:6de68d88f951574d68db5e08b826a1d6bfa3fa65ccb225df44fb6affcd10f12c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76
932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721644978000717499,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092815b69c8f520d1aff18dde384d22e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a044c8f12e050a2d045f22c804d1f67857cf1e3940378ece95f34dbabca4224,PodSandboxId:aa5ea59bc6ab86892b33e682605799fe30f8494c78f485a087ca1af5b719b645,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721644978004726158,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b30d5bc316a0745e75655ec7f1735df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b6c9b16bfecf02765f744c60fb831d75cc1ebe3fb827bd4f2e953d5d8bf68c,PodSandboxId:2d5288214adcb2fd134d3773a23a88ed1adcd966a0f2ca18ae294014894bc03c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08b
a382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721644948474657790,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fngn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c09a06-0d2e-4cec-83e4-5eca270d8ccb,},Annotations:map[string]string{io.kubernetes.container.hash: 118ba7e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62faf85b5f998c5ec5dff797db8fda2b56caaedf6b7d9f5c4f4f3dc81b34ea44,PodSandboxId:68948d206068fe96652798af21e4cbf4fb58b6e886980332abff333ff907e505,Metadata:&ContainerMetadata{Name:kube-proxy
,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721644948197516187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xzdwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730292a9-e2c0-4ce3-85e2-71f5d862decb,},Annotations:map[string]string{io.kubernetes.container.hash: 4aaf551,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f718ed4fbb292653a4589346f22aebce7d4a9bffaa93b143ffe1904e213a0864,PodSandboxId:d388f5b79c41308caed7656f9d5666d1ee360211efdad7cf0c793b4c0e02e512,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageS
pec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721644948163506258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75a4e563-bce0-47e4-915c-81066308b3a6,},Annotations:map[string]string{io.kubernetes.container.hash: d9415f97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9767b84391c282abd10df61bd7ff3820cfc654bb568203749493be062d26a1,PodSandboxId:ca2fd30eff2de958f6608fbbfe1570c5eeae681f23e3d517504d443075cee929,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image
:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721644944393299215,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092815b69c8f520d1aff18dde384d22e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4651856ff845f45600a572a5d6b1e55f58b602b19744f6003c9788e2c26818d,PodSandboxId:1b47859d43aae3ab0d01775251632cd5c9aa1f8226859dfd07fb0969e769486b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{I
mage:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721644944373310150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b30d5bc316a0745e75655ec7f1735df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50bd2981a3083088a47d287cddf9ee201d93d8856c9116589fadfb07599ce977,PodSandboxId:ddd65b3c96393029339c79da973437fbf6871b3ea336e96a9ef13c52483a6177,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f0
62788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721644944362176928,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c17143d199da830b3325fd2184865,},Annotations:map[string]string{io.kubernetes.container.hash: 72e49f79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94c7a1c2-5397-44e7-bd2c-f0e664e72401 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.798296802Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=713fba02-280e-4213-a16c-8a1d987cbc21 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.798562905Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=713fba02-280e-4213-a16c-8a1d987cbc21 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.800133934Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=398640cc-7986-43b5-9f85-b99623ebe383 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.801030644Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721645195800988184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:250735,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=398640cc-7986-43b5-9f85-b99623ebe383 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.801794847Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e9f6cec-e085-46f6-91f4-ae36aeb1a0a6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.801853615Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e9f6cec-e085-46f6-91f4-ae36aeb1a0a6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.802264883Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6f81cd463704768f62d10898115927772bbb577984a078db208a564b8917fff,PodSandboxId:68c58b8c8b4da28535a297899419a7795e24ffd477f80d638741c53ec0a8432e,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1721645042656287317,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-k7ctz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 191e5936-c1f0-47ef-963d-12a1c34225bb,},Annotations:map[string]string{io.kubernetes.container.hash: fa3b6bf8,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"c
ontainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeaa886ca694bf4f9d71ee972e5515ed6e4ab941e082a74dbf6683858a9d9b2b,PodSandboxId:22956d7af94913840f91b7db363d2cdc4bd5abc72f9623e1dad7bdd7ef066556,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1721645031118734915,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-kpj7q,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernet
es.pod.uid: 7cfe4ea9-c4f6-47f7-b54d-6077c5d4973b,},Annotations:map[string]string{io.kubernetes.container.hash: 8822e361,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:663b9a59358601206ca002cab1c16f667f7e8ed01a7cf5389b51031485ae176a,PodSandboxId:62e15200da9d2c26dd3298a277e67f84953f0e4783acc2e2aaff65f9535e8b45,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1721645029634860307,Labels:map[string]string{io.kubernetes.container.name: kubernet
es-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-f6mzb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 098625cb-2c7d-4837-a506-6f4a3b768007,},Annotations:map[string]string{io.kubernetes.container.hash: 515ecc21,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72a17afa0fff256f7794573fc2c37cf0a954100e8beed751aef8754e77d3a21,PodSandboxId:439a5cc95a6ae33c5466d917bba4f86593b117bacef94de64dc446167319b4b3,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c02
0289c,State:CONTAINER_EXITED,CreatedAt:1721645022152487229,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31b5c600-5f6b-4913-8638-d26b7e466b73,},Annotations:map[string]string{io.kubernetes.container.hash: d451fe43,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fc033dc6481fbd22e05931cea3b0cd557ff875e594746e403bf4ac1dace231a,PodSandboxId:7e631e6bdaa4f316ed13c617b1d24f2d504db972425991e0f62df21d27243986,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt
:1721645018002721728,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6d85cfcfd8-l4649,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 09010f1f-bbb2-43b5-8d3b-c4498b226b0b,},Annotations:map[string]string{io.kubernetes.container.hash: 65862f4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e5df3ef412a74616d36b1d3fd612f87ed53807603bc6bb838d2526cc9ade98,PodSandboxId:f580ac8001641eed2cc4f77928b15bff3173912efee6dfa7ba052858378b12c1,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,C
reatedAt:1721645013215951347,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-57b4589c47-5ggcb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ed1a2a0a-a1d2-45e3-9f35-66a17a9aef6f,},Annotations:map[string]string{io.kubernetes.container.hash: de29c480,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39862db06d4c602a34c54ca4ee5c53cc5e2366d6161185029c82d71be9500f9c,PodSandboxId:9089abe5a9a90f6a63799d359c08b9bdef0c3e2886bbf2bf51c93c0364580e77,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:172164498179
6025993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fngn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c09a06-0d2e-4cec-83e4-5eca270d8ccb,},Annotations:map[string]string{io.kubernetes.container.hash: 118ba7e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028c9a2ec438acde0e057a38569d26de16b3d96be57bf1fc886df70a2bab05bc,PodSandboxId:62a12d289d0ed407ba30b80dfab728ae1c7b94b4b0deba946d7db08d287d5214,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f70
9a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721644981802519349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75a4e563-bce0-47e4-915c-81066308b3a6,},Annotations:map[string]string{io.kubernetes.container.hash: d9415f97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc60b6f6c0096d0734d36f9af21886be13b60d431caa122cfd38a89ee5395ba,PodSandboxId:7584b24ea82ac3bc12f4257eb84f4417958bd23fd2f6a50f39b642b89957a052,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721644981785211152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xzdwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730292a9-e2c0-4ce3-85e2-71f5d862decb,},Annotations:map[string]string{io.kubernetes.container.hash: 4aaf551,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f650fc9686879493bbf66e41f18e8ade9d5161ff508cb534cd56ea4e5c3c77,PodSandboxId:aac5ec57ad943e99fa517a7016bdac1cdf7ed4083d8f84b23b513324d344d00d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721644978198770133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2698e7cd8e68c312ea4df21d35c4e1,},Annotations:map[string]string{io.kubernetes.container.hash: caefb72e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9e4b615825c45a8130568d02f6f1b0f25e60114c130461d53bc17c07e0214b,PodSandboxId:291340864e3273819496c9cb07db6897f6dfb2ee815b8bd229dcd1b3d37554d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721644978013322543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c17143d199da830b3325fd2184865,},Annotations:map[string]string{io.kubernetes.container.hash: 72e49f79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfc15f99a45b96dfd280ab81f89fd99817e330cb7343168981cb8c9749e8f91d,PodSandboxId:6de68d88f951574d68db5e08b826a1d6bfa3fa65ccb225df44fb6affcd10f12c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76
932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721644978000717499,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092815b69c8f520d1aff18dde384d22e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a044c8f12e050a2d045f22c804d1f67857cf1e3940378ece95f34dbabca4224,PodSandboxId:aa5ea59bc6ab86892b33e682605799fe30f8494c78f485a087ca1af5b719b645,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721644978004726158,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b30d5bc316a0745e75655ec7f1735df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b6c9b16bfecf02765f744c60fb831d75cc1ebe3fb827bd4f2e953d5d8bf68c,PodSandboxId:2d5288214adcb2fd134d3773a23a88ed1adcd966a0f2ca18ae294014894bc03c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08b
a382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721644948474657790,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fngn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c09a06-0d2e-4cec-83e4-5eca270d8ccb,},Annotations:map[string]string{io.kubernetes.container.hash: 118ba7e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62faf85b5f998c5ec5dff797db8fda2b56caaedf6b7d9f5c4f4f3dc81b34ea44,PodSandboxId:68948d206068fe96652798af21e4cbf4fb58b6e886980332abff333ff907e505,Metadata:&ContainerMetadata{Name:kube-proxy
,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721644948197516187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xzdwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730292a9-e2c0-4ce3-85e2-71f5d862decb,},Annotations:map[string]string{io.kubernetes.container.hash: 4aaf551,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f718ed4fbb292653a4589346f22aebce7d4a9bffaa93b143ffe1904e213a0864,PodSandboxId:d388f5b79c41308caed7656f9d5666d1ee360211efdad7cf0c793b4c0e02e512,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageS
pec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721644948163506258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75a4e563-bce0-47e4-915c-81066308b3a6,},Annotations:map[string]string{io.kubernetes.container.hash: d9415f97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9767b84391c282abd10df61bd7ff3820cfc654bb568203749493be062d26a1,PodSandboxId:ca2fd30eff2de958f6608fbbfe1570c5eeae681f23e3d517504d443075cee929,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image
:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721644944393299215,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092815b69c8f520d1aff18dde384d22e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4651856ff845f45600a572a5d6b1e55f58b602b19744f6003c9788e2c26818d,PodSandboxId:1b47859d43aae3ab0d01775251632cd5c9aa1f8226859dfd07fb0969e769486b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{I
mage:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721644944373310150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b30d5bc316a0745e75655ec7f1735df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50bd2981a3083088a47d287cddf9ee201d93d8856c9116589fadfb07599ce977,PodSandboxId:ddd65b3c96393029339c79da973437fbf6871b3ea336e96a9ef13c52483a6177,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f0
62788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721644944362176928,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c17143d199da830b3325fd2184865,},Annotations:map[string]string{io.kubernetes.container.hash: 72e49f79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e9f6cec-e085-46f6-91f4-ae36aeb1a0a6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.839084808Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5737e15e-bea9-4591-82e1-ce310205fd43 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.839202212Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5737e15e-bea9-4591-82e1-ce310205fd43 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.840190859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db030584-a187-412f-bf6c-5e7a2115f011 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.840947859Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721645195840850329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:250735,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db030584-a187-412f-bf6c-5e7a2115f011 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.841596719Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=758c5a02-ecea-4f3e-b41e-acdd4b8473f9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.841649348Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=758c5a02-ecea-4f3e-b41e-acdd4b8473f9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:46:35 functional-941610 crio[4241]: time="2024-07-22 10:46:35.842069030Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6f81cd463704768f62d10898115927772bbb577984a078db208a564b8917fff,PodSandboxId:68c58b8c8b4da28535a297899419a7795e24ffd477f80d638741c53ec0a8432e,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1721645042656287317,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-k7ctz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 191e5936-c1f0-47ef-963d-12a1c34225bb,},Annotations:map[string]string{io.kubernetes.container.hash: fa3b6bf8,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"c
ontainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeaa886ca694bf4f9d71ee972e5515ed6e4ab941e082a74dbf6683858a9d9b2b,PodSandboxId:22956d7af94913840f91b7db363d2cdc4bd5abc72f9623e1dad7bdd7ef066556,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1721645031118734915,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-kpj7q,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernet
es.pod.uid: 7cfe4ea9-c4f6-47f7-b54d-6077c5d4973b,},Annotations:map[string]string{io.kubernetes.container.hash: 8822e361,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:663b9a59358601206ca002cab1c16f667f7e8ed01a7cf5389b51031485ae176a,PodSandboxId:62e15200da9d2c26dd3298a277e67f84953f0e4783acc2e2aaff65f9535e8b45,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1721645029634860307,Labels:map[string]string{io.kubernetes.container.name: kubernet
es-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-f6mzb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 098625cb-2c7d-4837-a506-6f4a3b768007,},Annotations:map[string]string{io.kubernetes.container.hash: 515ecc21,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72a17afa0fff256f7794573fc2c37cf0a954100e8beed751aef8754e77d3a21,PodSandboxId:439a5cc95a6ae33c5466d917bba4f86593b117bacef94de64dc446167319b4b3,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c02
0289c,State:CONTAINER_EXITED,CreatedAt:1721645022152487229,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31b5c600-5f6b-4913-8638-d26b7e466b73,},Annotations:map[string]string{io.kubernetes.container.hash: d451fe43,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fc033dc6481fbd22e05931cea3b0cd557ff875e594746e403bf4ac1dace231a,PodSandboxId:7e631e6bdaa4f316ed13c617b1d24f2d504db972425991e0f62df21d27243986,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt
:1721645018002721728,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6d85cfcfd8-l4649,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 09010f1f-bbb2-43b5-8d3b-c4498b226b0b,},Annotations:map[string]string{io.kubernetes.container.hash: 65862f4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e5df3ef412a74616d36b1d3fd612f87ed53807603bc6bb838d2526cc9ade98,PodSandboxId:f580ac8001641eed2cc4f77928b15bff3173912efee6dfa7ba052858378b12c1,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,C
reatedAt:1721645013215951347,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-57b4589c47-5ggcb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ed1a2a0a-a1d2-45e3-9f35-66a17a9aef6f,},Annotations:map[string]string{io.kubernetes.container.hash: de29c480,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39862db06d4c602a34c54ca4ee5c53cc5e2366d6161185029c82d71be9500f9c,PodSandboxId:9089abe5a9a90f6a63799d359c08b9bdef0c3e2886bbf2bf51c93c0364580e77,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:172164498179
6025993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fngn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c09a06-0d2e-4cec-83e4-5eca270d8ccb,},Annotations:map[string]string{io.kubernetes.container.hash: 118ba7e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028c9a2ec438acde0e057a38569d26de16b3d96be57bf1fc886df70a2bab05bc,PodSandboxId:62a12d289d0ed407ba30b80dfab728ae1c7b94b4b0deba946d7db08d287d5214,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f70
9a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721644981802519349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75a4e563-bce0-47e4-915c-81066308b3a6,},Annotations:map[string]string{io.kubernetes.container.hash: d9415f97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc60b6f6c0096d0734d36f9af21886be13b60d431caa122cfd38a89ee5395ba,PodSandboxId:7584b24ea82ac3bc12f4257eb84f4417958bd23fd2f6a50f39b642b89957a052,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721644981785211152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xzdwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730292a9-e2c0-4ce3-85e2-71f5d862decb,},Annotations:map[string]string{io.kubernetes.container.hash: 4aaf551,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f650fc9686879493bbf66e41f18e8ade9d5161ff508cb534cd56ea4e5c3c77,PodSandboxId:aac5ec57ad943e99fa517a7016bdac1cdf7ed4083d8f84b23b513324d344d00d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721644978198770133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2698e7cd8e68c312ea4df21d35c4e1,},Annotations:map[string]string{io.kubernetes.container.hash: caefb72e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9e4b615825c45a8130568d02f6f1b0f25e60114c130461d53bc17c07e0214b,PodSandboxId:291340864e3273819496c9cb07db6897f6dfb2ee815b8bd229dcd1b3d37554d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721644978013322543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c17143d199da830b3325fd2184865,},Annotations:map[string]string{io.kubernetes.container.hash: 72e49f79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfc15f99a45b96dfd280ab81f89fd99817e330cb7343168981cb8c9749e8f91d,PodSandboxId:6de68d88f951574d68db5e08b826a1d6bfa3fa65ccb225df44fb6affcd10f12c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76
932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721644978000717499,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092815b69c8f520d1aff18dde384d22e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a044c8f12e050a2d045f22c804d1f67857cf1e3940378ece95f34dbabca4224,PodSandboxId:aa5ea59bc6ab86892b33e682605799fe30f8494c78f485a087ca1af5b719b645,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721644978004726158,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b30d5bc316a0745e75655ec7f1735df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b6c9b16bfecf02765f744c60fb831d75cc1ebe3fb827bd4f2e953d5d8bf68c,PodSandboxId:2d5288214adcb2fd134d3773a23a88ed1adcd966a0f2ca18ae294014894bc03c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08b
a382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721644948474657790,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fngn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c09a06-0d2e-4cec-83e4-5eca270d8ccb,},Annotations:map[string]string{io.kubernetes.container.hash: 118ba7e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62faf85b5f998c5ec5dff797db8fda2b56caaedf6b7d9f5c4f4f3dc81b34ea44,PodSandboxId:68948d206068fe96652798af21e4cbf4fb58b6e886980332abff333ff907e505,Metadata:&ContainerMetadata{Name:kube-proxy
,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721644948197516187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xzdwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730292a9-e2c0-4ce3-85e2-71f5d862decb,},Annotations:map[string]string{io.kubernetes.container.hash: 4aaf551,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f718ed4fbb292653a4589346f22aebce7d4a9bffaa93b143ffe1904e213a0864,PodSandboxId:d388f5b79c41308caed7656f9d5666d1ee360211efdad7cf0c793b4c0e02e512,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageS
pec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721644948163506258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75a4e563-bce0-47e4-915c-81066308b3a6,},Annotations:map[string]string{io.kubernetes.container.hash: d9415f97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9767b84391c282abd10df61bd7ff3820cfc654bb568203749493be062d26a1,PodSandboxId:ca2fd30eff2de958f6608fbbfe1570c5eeae681f23e3d517504d443075cee929,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image
:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721644944393299215,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092815b69c8f520d1aff18dde384d22e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4651856ff845f45600a572a5d6b1e55f58b602b19744f6003c9788e2c26818d,PodSandboxId:1b47859d43aae3ab0d01775251632cd5c9aa1f8226859dfd07fb0969e769486b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{I
mage:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721644944373310150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b30d5bc316a0745e75655ec7f1735df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50bd2981a3083088a47d287cddf9ee201d93d8856c9116589fadfb07599ce977,PodSandboxId:ddd65b3c96393029339c79da973437fbf6871b3ea336e96a9ef13c52483a6177,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f0
62788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721644944362176928,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-941610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c17143d199da830b3325fd2184865,},Annotations:map[string]string{io.kubernetes.container.hash: 72e49f79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=758c5a02-ecea-4f3e-b41e-acdd4b8473f9 name=/runtime.v1.RuntimeService/ListContainers
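
The CRI-O debug entries above are routine ListContainers/Version/ImageFsInfo round trips between the kubelet and the runtime. The same container inventory can be pulled in a readable form directly from the node; a minimal sketch, assuming the profile name matches the node (functional-941610) and using the crio socket path reported in the node annotations below:

	minikube -p functional-941610 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a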
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	a6f81cd463704       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  2 minutes ago       Running             mysql                       0                   68c58b8c8b4da       mysql-64454c8b5c-k7ctz
	aeaa886ca694b       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   2 minutes ago       Running             dashboard-metrics-scraper   0                   22956d7af9491       dashboard-metrics-scraper-b5fc48f67-kpj7q
	663b9a5935860       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         2 minutes ago       Running             kubernetes-dashboard        0                   62e15200da9d2       kubernetes-dashboard-779776cb65-f6mzb
	e72a17afa0fff       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              2 minutes ago       Exited              mount-munger                0                   439a5cc95a6ae       busybox-mount
	5fc033dc6481f       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 2 minutes ago       Running             echoserver                  0                   7e631e6bdaa4f       hello-node-6d85cfcfd8-l4649
	c0e5df3ef412a       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago       Running             echoserver                  0                   f580ac8001641       hello-node-connect-57b4589c47-5ggcb
	028c9a2ec438a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago       Running             storage-provisioner         2                   62a12d289d0ed       storage-provisioner
	39862db06d4c6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                 3 minutes ago       Running             coredns                     2                   9089abe5a9a90       coredns-7db6d8ff4d-6fngn
	3cc60b6f6c009       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                 3 minutes ago       Running             kube-proxy                  2                   7584b24ea82ac       kube-proxy-xzdwr
	d3f650fc96868       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                 3 minutes ago       Running             kube-apiserver              0                   aac5ec57ad943       kube-apiserver-functional-941610
	5b9e4b615825c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                 3 minutes ago       Running             etcd                        2                   291340864e327       etcd-functional-941610
	3a044c8f12e05       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                 3 minutes ago       Running             kube-scheduler              2                   aa5ea59bc6ab8       kube-scheduler-functional-941610
	dfc15f99a45b9       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                 3 minutes ago       Running             kube-controller-manager     2                   6de68d88f9515       kube-controller-manager-functional-941610
	57b6c9b16bfec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                 4 minutes ago       Exited              coredns                     1                   2d5288214adcb       coredns-7db6d8ff4d-6fngn
	62faf85b5f998       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                 4 minutes ago       Exited              kube-proxy                  1                   68948d206068f       kube-proxy-xzdwr
	f718ed4fbb292       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago       Exited              storage-provisioner         1                   d388f5b79c413       storage-provisioner
	1d9767b84391c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                 4 minutes ago       Exited              kube-controller-manager     1                   ca2fd30eff2de       kube-controller-manager-functional-941610
	c4651856ff845       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                 4 minutes ago       Exited              kube-scheduler              1                   1b47859d43aae       kube-scheduler-functional-941610
	50bd2981a3083       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                 4 minutes ago       Exited              etcd                        1                   ddd65b3c96393       etcd-functional-941610
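
This table is the runtime's summary of every container CRI-O still tracks, including the Exited attempt-1 copies from before the last control-plane restart. The truncated container ID in the first column can be fed back to the runtime to recover that container's own log; a hedged sketch using the exited coredns attempt from the table (crictl normally resolves a unique ID prefix):

	minikube -p functional-941610 ssh -- sudo crictl logs 57b6c9b16bfec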
	
	
	==> coredns [39862db06d4c602a34c54ca4ee5c53cc5e2366d6161185029c82d71be9500f9c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58097 - 46747 "HINFO IN 5199932908478790710.7873111082425992775. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010158805s
	
	
	==> coredns [57b6c9b16bfecf02765f744c60fb831d75cc1ebe3fb827bd4f2e953d5d8bf68c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53411 - 54575 "HINFO IN 2274980487446807962.7157556978308014736. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008225201s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
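
Both coredns instances started cleanly; the earlier one simply received SIGTERM and entered its 5s lameduck window when the control plane was restarted, which is expected for this functional suite. The same logs can be pulled through the API server using the pod name from the dump; a sketch, assuming the kubectl context carries the profile name, with --previous requesting the prior container instance:

	kubectl --context functional-941610 -n kube-system logs coredns-7db6d8ff4d-6fngn
	kubectl --context functional-941610 -n kube-system logs coredns-7db6d8ff4d-6fngn --previous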
	
	
	==> describe nodes <==
	Name:               functional-941610
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-941610
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=functional-941610
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T10_41_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:41:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-941610
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 10:46:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:44:32 +0000   Mon, 22 Jul 2024 10:41:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:44:32 +0000   Mon, 22 Jul 2024 10:41:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:44:32 +0000   Mon, 22 Jul 2024 10:41:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:44:32 +0000   Mon, 22 Jul 2024 10:41:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    functional-941610
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4f1225d83704884aabb50f50bbc82c6
	  System UUID:                d4f1225d-8370-4884-aabb-50f50bbc82c6
	  Boot ID:                    d819f017-eb42-4054-8ebf-4c5ae1147b9d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6d85cfcfd8-l4649                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     hello-node-connect-57b4589c47-5ggcb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     mysql-64454c8b5c-k7ctz                       600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    2m48s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-7db6d8ff4d-6fngn                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m27s
	  kube-system                 etcd-functional-941610                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m41s
	  kube-system                 kube-apiserver-functional-941610             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-controller-manager-functional-941610    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-proxy-xzdwr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-scheduler-functional-941610             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-kpj7q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-f6mzb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m26s                  kube-proxy       
	  Normal  Starting                 3m34s                  kube-proxy       
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m47s (x8 over 4m47s)  kubelet          Node functional-941610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s (x8 over 4m47s)  kubelet          Node functional-941610 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s (x7 over 4m47s)  kubelet          Node functional-941610 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     4m41s                  kubelet          Node functional-941610 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m41s                  kubelet          Node functional-941610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s                  kubelet          Node functional-941610 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m41s                  kubelet          Starting kubelet.
	  Normal  NodeReady                4m40s                  kubelet          Node functional-941610 status is now: NodeReady
	  Normal  RegisteredNode           4m29s                  node-controller  Node functional-941610 event: Registered Node functional-941610 in Controller
	  Normal  NodeHasNoDiskPressure    4m13s (x8 over 4m13s)  kubelet          Node functional-941610 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m13s (x8 over 4m13s)  kubelet          Node functional-941610 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     4m13s (x7 over 4m13s)  kubelet          Node functional-941610 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m57s                  node-controller  Node functional-941610 event: Registered Node functional-941610 in Controller
	  Normal  Starting                 3m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m39s (x8 over 3m39s)  kubelet          Node functional-941610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m39s (x8 over 3m39s)  kubelet          Node functional-941610 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s (x7 over 3m39s)  kubelet          Node functional-941610 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m22s                  node-controller  Node functional-941610 event: Registered Node functional-941610 in Controller
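
The node description above corresponds to kubectl describe node output; the repeated "Starting kubelet" / RegisteredNode events reflect the several kubelet restarts this functional suite drives rather than an unexpected crash. To re-run the same inspection against the live cluster (assuming the kubectl context matches the profile name):

	kubectl --context functional-941610 describe node functional-941610
	kubectl --context functional-941610 get node functional-941610 -o wide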
	
	
	==> dmesg <==
	[  +0.180285] systemd-fstab-generator[2306]: Ignoring "noauto" option for root device
	[  +0.135835] systemd-fstab-generator[2318]: Ignoring "noauto" option for root device
	[  +0.252377] systemd-fstab-generator[2346]: Ignoring "noauto" option for root device
	[  +0.695133] systemd-fstab-generator[2472]: Ignoring "noauto" option for root device
	[  +1.902876] systemd-fstab-generator[2598]: Ignoring "noauto" option for root device
	[  +4.559811] kauditd_printk_skb: 184 callbacks suppressed
	[  +8.273614] systemd-fstab-generator[3340]: Ignoring "noauto" option for root device
	[  +0.092251] kauditd_printk_skb: 35 callbacks suppressed
	[ +16.948230] systemd-fstab-generator[4161]: Ignoring "noauto" option for root device
	[  +0.073806] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.054537] systemd-fstab-generator[4173]: Ignoring "noauto" option for root device
	[  +0.158480] systemd-fstab-generator[4187]: Ignoring "noauto" option for root device
	[  +0.146114] systemd-fstab-generator[4200]: Ignoring "noauto" option for root device
	[  +0.269787] systemd-fstab-generator[4227]: Ignoring "noauto" option for root device
	[  +0.749923] systemd-fstab-generator[4350]: Ignoring "noauto" option for root device
	[  +2.469309] systemd-fstab-generator[4813]: Ignoring "noauto" option for root device
	[Jul22 10:43] kauditd_printk_skb: 231 callbacks suppressed
	[ +12.343226] kauditd_printk_skb: 10 callbacks suppressed
	[  +3.983731] systemd-fstab-generator[5351]: Ignoring "noauto" option for root device
	[  +6.525776] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.058573] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.458017] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.002598] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.316340] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.042879] kauditd_printk_skb: 20 callbacks suppressed
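
The dmesg excerpt is dominated by systemd-fstab-generator notices and kauditd rate-limit messages from the guest VM; nothing here points at a kernel-level fault. To capture the same buffer from the node, a sketch via the minikube SSH helper (again assuming the profile name matches the node):

	minikube -p functional-941610 ssh -- sudo dmesg | tail -n 50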
	
	
	==> etcd [50bd2981a3083088a47d287cddf9ee201d93d8856c9116589fadfb07599ce977] <==
	{"level":"info","ts":"2024-07-22T10:42:24.674362Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T10:42:26.255103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-22T10:42:26.25515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-22T10:42:26.255195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 received MsgPreVoteResp from c66b2a9605a64cb6 at term 2"}
	{"level":"info","ts":"2024-07-22T10:42:26.255208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 became candidate at term 3"}
	{"level":"info","ts":"2024-07-22T10:42:26.255214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 received MsgVoteResp from c66b2a9605a64cb6 at term 3"}
	{"level":"info","ts":"2024-07-22T10:42:26.255222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 became leader at term 3"}
	{"level":"info","ts":"2024-07-22T10:42:26.255229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c66b2a9605a64cb6 elected leader c66b2a9605a64cb6 at term 3"}
	{"level":"info","ts":"2024-07-22T10:42:26.259672Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c66b2a9605a64cb6","local-member-attributes":"{Name:functional-941610 ClientURLs:[https://192.168.39.245:2379]}","request-path":"/0/members/c66b2a9605a64cb6/attributes","cluster-id":"8f5341249654324","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T10:42:26.259716Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T10:42:26.260024Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T10:42:26.260135Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T10:42:26.260118Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T10:42:26.261657Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T10:42:26.262386Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.245:2379"}
	{"level":"info","ts":"2024-07-22T10:42:47.231107Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-22T10:42:47.231152Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-941610","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.245:2380"],"advertise-client-urls":["https://192.168.39.245:2379"]}
	{"level":"warn","ts":"2024-07-22T10:42:47.231212Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.245:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T10:42:47.231235Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.245:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T10:42:47.231803Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T10:42:47.231936Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-22T10:42:47.296472Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c66b2a9605a64cb6","current-leader-member-id":"c66b2a9605a64cb6"}
	{"level":"info","ts":"2024-07-22T10:42:47.29952Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.245:2380"}
	{"level":"info","ts":"2024-07-22T10:42:47.299838Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.245:2380"}
	{"level":"info","ts":"2024-07-22T10:42:47.299924Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-941610","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.245:2380"],"advertise-client-urls":["https://192.168.39.245:2379"]}
	
	
	==> etcd [5b9e4b615825c45a8130568d02f6f1b0f25e60114c130461d53bc17c07e0214b] <==
	{"level":"info","ts":"2024-07-22T10:42:59.82061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 became leader at term 4"}
	{"level":"info","ts":"2024-07-22T10:42:59.820617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c66b2a9605a64cb6 elected leader c66b2a9605a64cb6 at term 4"}
	{"level":"info","ts":"2024-07-22T10:42:59.822148Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c66b2a9605a64cb6","local-member-attributes":"{Name:functional-941610 ClientURLs:[https://192.168.39.245:2379]}","request-path":"/0/members/c66b2a9605a64cb6/attributes","cluster-id":"8f5341249654324","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T10:42:59.822273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T10:42:59.822423Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T10:42:59.822668Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T10:42:59.82268Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T10:42:59.824264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.245:2379"}
	{"level":"info","ts":"2024-07-22T10:42:59.824343Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-22T10:43:48.914584Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.740735ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:14774"}
	{"level":"info","ts":"2024-07-22T10:43:48.914669Z","caller":"traceutil/trace.go:171","msg":"trace[219382439] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:817; }","duration":"254.926357ms","start":"2024-07-22T10:43:48.659721Z","end":"2024-07-22T10:43:48.914647Z","steps":["trace[219382439] 'range keys from in-memory index tree'  (duration: 254.586184ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:43:48.914753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"346.287594ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2024-07-22T10:43:48.91479Z","caller":"traceutil/trace.go:171","msg":"trace[82458324] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:817; }","duration":"346.359ms","start":"2024-07-22T10:43:48.568422Z","end":"2024-07-22T10:43:48.914781Z","steps":["trace[82458324] 'range keys from in-memory index tree'  (duration: 346.187514ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:43:48.914815Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:43:48.568405Z","time spent":"346.399247ms","remote":"127.0.0.1:50998","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":194,"request content":"key:\"/registry/serviceaccounts/default/default\" "}
	{"level":"info","ts":"2024-07-22T10:44:02.015006Z","caller":"traceutil/trace.go:171","msg":"trace[1145654370] linearizableReadLoop","detail":"{readStateIndex:907; appliedIndex:906; }","duration":"355.430988ms","start":"2024-07-22T10:44:01.65953Z","end":"2024-07-22T10:44:02.014961Z","steps":["trace[1145654370] 'read index received'  (duration: 355.274267ms)","trace[1145654370] 'applied index is now lower than readState.Index'  (duration: 153.98µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T10:44:02.015242Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"355.637952ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:14774"}
	{"level":"info","ts":"2024-07-22T10:44:02.015324Z","caller":"traceutil/trace.go:171","msg":"trace[1399871869] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:846; }","duration":"355.810145ms","start":"2024-07-22T10:44:01.659505Z","end":"2024-07-22T10:44:02.015315Z","steps":["trace[1399871869] 'agreement among raft nodes before linearized reading'  (duration: 355.559891ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:44:02.015357Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:44:01.659491Z","time spent":"355.85467ms","remote":"127.0.0.1:50970","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":14797,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2024-07-22T10:44:02.015238Z","caller":"traceutil/trace.go:171","msg":"trace[1052828148] transaction","detail":"{read_only:false; response_revision:846; number_of_response:1; }","duration":"410.369228ms","start":"2024-07-22T10:44:01.604854Z","end":"2024-07-22T10:44:02.015224Z","steps":["trace[1052828148] 'process raft request'  (duration: 409.85374ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:44:02.015972Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:44:01.604839Z","time spent":"410.570148ms","remote":"127.0.0.1:50962","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:845 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-07-22T10:44:03.90321Z","caller":"traceutil/trace.go:171","msg":"trace[1658461830] transaction","detail":"{read_only:false; response_revision:854; number_of_response:1; }","duration":"188.918059ms","start":"2024-07-22T10:44:03.714277Z","end":"2024-07-22T10:44:03.903195Z","steps":["trace[1658461830] 'process raft request'  (duration: 188.775265ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T10:44:06.151059Z","caller":"traceutil/trace.go:171","msg":"trace[121777124] transaction","detail":"{read_only:false; response_revision:860; number_of_response:1; }","duration":"116.884543ms","start":"2024-07-22T10:44:06.034154Z","end":"2024-07-22T10:44:06.151038Z","steps":["trace[121777124] 'process raft request'  (duration: 116.682672ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T10:44:08.309314Z","caller":"traceutil/trace.go:171","msg":"trace[630281790] transaction","detail":"{read_only:false; response_revision:861; number_of_response:1; }","duration":"150.798534ms","start":"2024-07-22T10:44:08.158501Z","end":"2024-07-22T10:44:08.3093Z","steps":["trace[630281790] 'process raft request'  (duration: 150.682822ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T10:44:10.469307Z","caller":"traceutil/trace.go:171","msg":"trace[1573706278] transaction","detail":"{read_only:false; response_revision:862; number_of_response:1; }","duration":"153.532329ms","start":"2024-07-22T10:44:10.315758Z","end":"2024-07-22T10:44:10.46929Z","steps":["trace[1573706278] 'process raft request'  (duration: 153.398153ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T10:44:16.646007Z","caller":"traceutil/trace.go:171","msg":"trace[375984608] transaction","detail":"{read_only:false; response_revision:868; number_of_response:1; }","duration":"144.651761ms","start":"2024-07-22T10:44:16.501336Z","end":"2024-07-22T10:44:16.645987Z","steps":["trace[375984608] 'process raft request'  (duration: 144.414133ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:46:36 up 5 min,  0 users,  load average: 0.39, 0.62, 0.31
	Linux functional-941610 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d3f650fc9686879493bbf66e41f18e8ade9d5161ff508cb534cd56ea4e5c3c77] <==
	I0722 10:43:01.116374       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0722 10:43:01.116415       1 cache.go:39] Caches are synced for autoregister controller
	I0722 10:43:01.116473       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0722 10:43:01.116497       1 policy_source.go:224] refreshing policies
	I0722 10:43:01.113154       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0722 10:43:01.124146       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0722 10:43:01.205172       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0722 10:43:02.008999       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0722 10:43:02.680956       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0722 10:43:02.719410       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0722 10:43:02.759052       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0722 10:43:02.787413       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0722 10:43:02.793716       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0722 10:43:13.948432       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0722 10:43:14.139003       1 controller.go:615] quota admission added evaluator for: endpoints
	I0722 10:43:24.470326       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.2.19"}
	I0722 10:43:28.960410       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0722 10:43:29.071713       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.33.230"}
	I0722 10:43:29.604314       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.61.170"}
	I0722 10:43:43.516043       1 controller.go:615] quota admission added evaluator for: namespaces
	I0722 10:43:43.816145       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.234.131"}
	I0722 10:43:43.839540       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.225.107"}
	I0722 10:43:48.297557       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.105.68.140"}
	E0722 10:44:09.508564       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8441->192.168.39.1:45502: use of closed network connection
	E0722 10:44:10.714792       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8441->192.168.39.1:52040: use of closed network connection
	
	
	==> kube-controller-manager [1d9767b84391c282abd10df61bd7ff3820cfc654bb568203749493be062d26a1] <==
	I0722 10:42:39.825427       1 shared_informer.go:320] Caches are synced for expand
	I0722 10:42:39.825506       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0722 10:42:39.830117       1 shared_informer.go:320] Caches are synced for GC
	I0722 10:42:39.831508       1 shared_informer.go:320] Caches are synced for HPA
	I0722 10:42:39.832763       1 shared_informer.go:320] Caches are synced for deployment
	I0722 10:42:39.834126       1 shared_informer.go:320] Caches are synced for service account
	I0722 10:42:39.835357       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0722 10:42:39.835492       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.747µs"
	I0722 10:42:39.839283       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0722 10:42:39.848165       1 shared_informer.go:320] Caches are synced for namespace
	I0722 10:42:39.859966       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0722 10:42:39.862789       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0722 10:42:39.911927       1 shared_informer.go:320] Caches are synced for resource quota
	I0722 10:42:39.946461       1 shared_informer.go:320] Caches are synced for resource quota
	I0722 10:42:39.982975       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0722 10:42:39.983062       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0722 10:42:39.983113       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0722 10:42:39.983168       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0722 10:42:39.995820       1 shared_informer.go:320] Caches are synced for persistent volume
	I0722 10:42:40.034096       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0722 10:42:40.085546       1 shared_informer.go:320] Caches are synced for PV protection
	I0722 10:42:40.089869       1 shared_informer.go:320] Caches are synced for attach detach
	I0722 10:42:40.454538       1 shared_informer.go:320] Caches are synced for garbage collector
	I0722 10:42:40.454559       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0722 10:42:40.468780       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [dfc15f99a45b96dfd280ab81f89fd99817e330cb7343168981cb8c9749e8f91d] <==
	E0722 10:43:43.670588       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0722 10:43:43.670028       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="11.972716ms"
	E0722 10:43:43.671170       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0722 10:43:43.683081       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="12.375235ms"
	E0722 10:43:43.683120       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0722 10:43:43.683317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="11.341231ms"
	E0722 10:43:43.683361       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0722 10:43:43.731739       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="45.681583ms"
	I0722 10:43:43.733023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="41.707288ms"
	I0722 10:43:43.776498       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="43.400472ms"
	I0722 10:43:43.777698       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="45.907546ms"
	I0722 10:43:43.814559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="36.754862ms"
	I0722 10:43:43.814641       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="37.86426ms"
	I0722 10:43:43.814773       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="40.094µs"
	I0722 10:43:43.814845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="149.284µs"
	I0722 10:43:48.396252       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="45.69583ms"
	I0722 10:43:48.432207       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="35.920771ms"
	I0722 10:43:48.451156       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="18.918099ms"
	I0722 10:43:48.451217       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="39.81µs"
	I0722 10:43:50.127733       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="15.072128ms"
	I0722 10:43:50.128036       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="55.578µs"
	I0722 10:43:52.483934       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="32.971346ms"
	I0722 10:43:52.485577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="1.206791ms"
	I0722 10:44:03.920248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="12.980452ms"
	I0722 10:44:03.921561       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="106.067µs"
	
	
	==> kube-proxy [3cc60b6f6c0096d0734d36f9af21886be13b60d431caa122cfd38a89ee5395ba] <==
	I0722 10:43:01.998335       1 server_linux.go:69] "Using iptables proxy"
	I0722 10:43:02.010709       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.245"]
	I0722 10:43:02.089507       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 10:43:02.089652       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 10:43:02.089786       1 server_linux.go:165] "Using iptables Proxier"
	I0722 10:43:02.094026       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 10:43:02.094334       1 server.go:872] "Version info" version="v1.30.3"
	I0722 10:43:02.095432       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:43:02.097610       1 config.go:192] "Starting service config controller"
	I0722 10:43:02.097680       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 10:43:02.097758       1 config.go:101] "Starting endpoint slice config controller"
	I0722 10:43:02.098227       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 10:43:02.099489       1 config.go:319] "Starting node config controller"
	I0722 10:43:02.099560       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 10:43:02.198071       1 shared_informer.go:320] Caches are synced for service config
	I0722 10:43:02.199257       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 10:43:02.200460       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [62faf85b5f998c5ec5dff797db8fda2b56caaedf6b7d9f5c4f4f3dc81b34ea44] <==
	I0722 10:42:28.390524       1 server_linux.go:69] "Using iptables proxy"
	I0722 10:42:28.400674       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.245"]
	I0722 10:42:28.472844       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 10:42:28.472938       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 10:42:28.472959       1 server_linux.go:165] "Using iptables Proxier"
	I0722 10:42:28.481324       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 10:42:28.481508       1 server.go:872] "Version info" version="v1.30.3"
	I0722 10:42:28.481520       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:42:28.486198       1 config.go:192] "Starting service config controller"
	I0722 10:42:28.486216       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 10:42:28.486239       1 config.go:101] "Starting endpoint slice config controller"
	I0722 10:42:28.486243       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 10:42:28.486706       1 config.go:319] "Starting node config controller"
	I0722 10:42:28.486714       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 10:42:28.586967       1 shared_informer.go:320] Caches are synced for node config
	I0722 10:42:28.587045       1 shared_informer.go:320] Caches are synced for service config
	I0722 10:42:28.587062       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3a044c8f12e050a2d045f22c804d1f67857cf1e3940378ece95f34dbabca4224] <==
	I0722 10:42:59.151522       1 serving.go:380] Generated self-signed cert in-memory
	W0722 10:43:01.048402       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0722 10:43:01.048440       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 10:43:01.048450       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0722 10:43:01.048456       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0722 10:43:01.112731       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0722 10:43:01.112769       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:43:01.114309       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0722 10:43:01.114401       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0722 10:43:01.114426       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 10:43:01.114440       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0722 10:43:01.215539       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c4651856ff845f45600a572a5d6b1e55f58b602b19744f6003c9788e2c26818d] <==
	I0722 10:42:25.341993       1 serving.go:380] Generated self-signed cert in-memory
	W0722 10:42:27.489527       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0722 10:42:27.489711       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 10:42:27.489744       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0722 10:42:27.489820       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0722 10:42:27.546859       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0722 10:42:27.547004       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:42:27.555948       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0722 10:42:27.555993       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 10:42:27.556841       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0722 10:42:27.557084       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0722 10:42:27.656272       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 10:42:47.222667       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0722 10:42:47.222852       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0722 10:42:47.223160       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0722 10:42:47.223536       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 22 10:43:44 functional-941610 kubelet[4820]: I0722 10:43:44.834004    4820 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31b5c600-5f6b-4913-8638-d26b7e466b73-kube-api-access-9snnn" (OuterVolumeSpecName: "kube-api-access-9snnn") pod "31b5c600-5f6b-4913-8638-d26b7e466b73" (UID: "31b5c600-5f6b-4913-8638-d26b7e466b73"). InnerVolumeSpecName "kube-api-access-9snnn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 22 10:43:44 functional-941610 kubelet[4820]: I0722 10:43:44.931400    4820 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9snnn\" (UniqueName: \"kubernetes.io/projected/31b5c600-5f6b-4913-8638-d26b7e466b73-kube-api-access-9snnn\") on node \"functional-941610\" DevicePath \"\""
	Jul 22 10:43:44 functional-941610 kubelet[4820]: I0722 10:43:44.931498    4820 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/31b5c600-5f6b-4913-8638-d26b7e466b73-test-volume\") on node \"functional-941610\" DevicePath \"\""
	Jul 22 10:43:45 functional-941610 kubelet[4820]: I0722 10:43:45.046491    4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="439a5cc95a6ae33c5466d917bba4f86593b117bacef94de64dc446167319b4b3"
	Jul 22 10:43:48 functional-941610 kubelet[4820]: I0722 10:43:48.374230    4820 topology_manager.go:215] "Topology Admit Handler" podUID="191e5936-c1f0-47ef-963d-12a1c34225bb" podNamespace="default" podName="mysql-64454c8b5c-k7ctz"
	Jul 22 10:43:48 functional-941610 kubelet[4820]: E0722 10:43:48.374307    4820 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31b5c600-5f6b-4913-8638-d26b7e466b73" containerName="mount-munger"
	Jul 22 10:43:48 functional-941610 kubelet[4820]: I0722 10:43:48.374344    4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="31b5c600-5f6b-4913-8638-d26b7e466b73" containerName="mount-munger"
	Jul 22 10:43:48 functional-941610 kubelet[4820]: I0722 10:43:48.464261    4820 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6slz\" (UniqueName: \"kubernetes.io/projected/191e5936-c1f0-47ef-963d-12a1c34225bb-kube-api-access-r6slz\") pod \"mysql-64454c8b5c-k7ctz\" (UID: \"191e5936-c1f0-47ef-963d-12a1c34225bb\") " pod="default/mysql-64454c8b5c-k7ctz"
	Jul 22 10:43:52 functional-941610 kubelet[4820]: I0722 10:43:52.452293    4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-kpj7q" podStartSLOduration=2.789533613 podStartE2EDuration="9.452276627s" podCreationTimestamp="2024-07-22 10:43:43 +0000 UTC" firstStartedPulling="2024-07-22 10:43:44.439639656 +0000 UTC m=+47.075826514" lastFinishedPulling="2024-07-22 10:43:51.10238267 +0000 UTC m=+53.738569528" observedRunningTime="2024-07-22 10:43:52.45225524 +0000 UTC m=+55.088442117" watchObservedRunningTime="2024-07-22 10:43:52.452276627 +0000 UTC m=+55.088463499"
	Jul 22 10:43:52 functional-941610 kubelet[4820]: I0722 10:43:52.452509    4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-f6mzb" podStartSLOduration=4.284863747 podStartE2EDuration="9.452504571s" podCreationTimestamp="2024-07-22 10:43:43 +0000 UTC" firstStartedPulling="2024-07-22 10:43:44.43962635 +0000 UTC m=+47.075813208" lastFinishedPulling="2024-07-22 10:43:49.60726716 +0000 UTC m=+52.243454032" observedRunningTime="2024-07-22 10:43:50.114798408 +0000 UTC m=+52.750985302" watchObservedRunningTime="2024-07-22 10:43:52.452504571 +0000 UTC m=+55.088691448"
	Jul 22 10:43:57 functional-941610 kubelet[4820]: E0722 10:43:57.615124    4820 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:43:57 functional-941610 kubelet[4820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:43:57 functional-941610 kubelet[4820]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:43:57 functional-941610 kubelet[4820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:43:57 functional-941610 kubelet[4820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:44:57 functional-941610 kubelet[4820]: E0722 10:44:57.603557    4820 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:44:57 functional-941610 kubelet[4820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:44:57 functional-941610 kubelet[4820]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:44:57 functional-941610 kubelet[4820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:44:57 functional-941610 kubelet[4820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:45:57 functional-941610 kubelet[4820]: E0722 10:45:57.594269    4820 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:45:57 functional-941610 kubelet[4820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:45:57 functional-941610 kubelet[4820]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:45:57 functional-941610 kubelet[4820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:45:57 functional-941610 kubelet[4820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> kubernetes-dashboard [663b9a59358601206ca002cab1c16f667f7e8ed01a7cf5389b51031485ae176a] <==
	2024/07/22 10:43:49 Starting overwatch
	2024/07/22 10:43:49 Using namespace: kubernetes-dashboard
	2024/07/22 10:43:49 Using in-cluster config to connect to apiserver
	2024/07/22 10:43:49 Using secret token for csrf signing
	2024/07/22 10:43:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/07/22 10:43:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/07/22 10:43:49 Successful initial request to the apiserver, version: v1.30.3
	2024/07/22 10:43:49 Generating JWE encryption key
	2024/07/22 10:43:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/07/22 10:43:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/07/22 10:43:49 Initializing JWE encryption key from synchronized object
	2024/07/22 10:43:49 Creating in-cluster Sidecar client
	2024/07/22 10:43:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/22 10:43:50 Serving insecurely on HTTP port: 9090
	2024/07/22 10:44:20 Successful request to sidecar
	
	
	==> storage-provisioner [028c9a2ec438acde0e057a38569d26de16b3d96be57bf1fc886df70a2bab05bc] <==
	I0722 10:43:01.921772       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 10:43:01.949168       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 10:43:01.952924       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 10:43:19.362855       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 10:43:19.363228       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-941610_44e33601-0d69-46f5-89a0-378724e9a1d0!
	I0722 10:43:19.363758       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"307accfd-fe2b-482c-8fc3-d4d846c7abee", APIVersion:"v1", ResourceVersion:"602", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-941610_44e33601-0d69-46f5-89a0-378724e9a1d0 became leader
	I0722 10:43:19.464431       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-941610_44e33601-0d69-46f5-89a0-378724e9a1d0!
	I0722 10:43:34.468692       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0722 10:43:34.468862       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    8870077f-ea96-4bd3-9cf7-c1afc4097839 397 0 2024-07-22 10:42:09 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-22 10:42:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-7f507eb4-c031-4a79-8110-79ceec91cfe5 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  7f507eb4-c031-4a79-8110-79ceec91cfe5 696 0 2024-07-22 10:43:34 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-22 10:43:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-22 10:43:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0722 10:43:34.470516       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-7f507eb4-c031-4a79-8110-79ceec91cfe5" provisioned
	I0722 10:43:34.470680       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0722 10:43:34.470692       1 volume_store.go:212] Trying to save persistentvolume "pvc-7f507eb4-c031-4a79-8110-79ceec91cfe5"
	I0722 10:43:34.469854       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"7f507eb4-c031-4a79-8110-79ceec91cfe5", APIVersion:"v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0722 10:43:34.484830       1 volume_store.go:219] persistentvolume "pvc-7f507eb4-c031-4a79-8110-79ceec91cfe5" saved
	I0722 10:43:34.485966       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"7f507eb4-c031-4a79-8110-79ceec91cfe5", APIVersion:"v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-7f507eb4-c031-4a79-8110-79ceec91cfe5
	
	
	==> storage-provisioner [f718ed4fbb292653a4589346f22aebce7d4a9bffaa93b143ffe1904e213a0864] <==
	I0722 10:42:28.289996       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 10:42:28.305028       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 10:42:28.305071       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 10:42:45.708793       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 10:42:45.709139       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-941610_47ae3771-e9f9-4ef6-bdba-f9a2a0b17970!
	I0722 10:42:45.710090       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"307accfd-fe2b-482c-8fc3-d4d846c7abee", APIVersion:"v1", ResourceVersion:"517", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-941610_47ae3771-e9f9-4ef6-bdba-f9a2a0b17970 became leader
	I0722 10:42:45.809870       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-941610_47ae3771-e9f9-4ef6-bdba-f9a2a0b17970!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-941610 -n functional-941610
helpers_test.go:261: (dbg) Run:  kubectl --context functional-941610 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-941610 describe pod busybox-mount sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-941610 describe pod busybox-mount sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-941610/192.168.39.245
	Start Time:       Mon, 22 Jul 2024 10:43:40 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://e72a17afa0fff256f7794573fc2c37cf0a954100e8beed751aef8754e77d3a21
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 22 Jul 2024 10:43:42 +0000
	      Finished:     Mon, 22 Jul 2024 10:43:42 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9snnn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-9snnn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m57s  default-scheduler  Successfully assigned default/busybox-mount to functional-941610
	  Normal  Pulling    2m56s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m55s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 942ms (942ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    2m55s  kubelet            Created container mount-munger
	  Normal  Started    2m55s  kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-941610/192.168.39.245
	Start Time:       Mon, 22 Jul 2024 10:43:34 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ph9h9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-ph9h9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3m3s  default-scheduler  Successfully assigned default/sp-pod to functional-941610

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (187.83s)
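
For context, a minimal sketch of the PersistentVolumeClaim and pod this test appears to exercise, reconstructed from the storage-provisioner log and the kubectl describe output above. The names, image, label, mount path, and 500Mi request are taken from that output; the apiVersion/kind boilerplate and field ordering are assumptions, so this is a reconstruction rather than the test's actual manifest:

    # PVC "myclaim": 500Mi, ReadWriteOnce, default "standard" storage class (per the provisioner log)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem
      resources:
        requests:
          storage: 500Mi
    ---
    # Pod "sp-pod": nginx container mounting the claim at /tmp/mount (per the describe output)
    apiVersion: v1
    kind: Pod
    metadata:
      name: sp-pod
      namespace: default
      labels:
        test: storage-provisioner
    spec:
      containers:
        - name: myfrontend
          image: docker.io/nginx
          volumeMounts:
            - name: mypd
              mountPath: /tmp/mount
      volumes:
        - name: mypd
          persistentVolumeClaim:
            claimName: myclaim

The provisioner log reports ProvisioningSucceeded for pvc-7f507eb4-c031-4a79-8110-79ceec91cfe5, and the describe output shows sp-pod scheduled but still Waiting in ContainerCreating, which is consistent with the failure above: the claim was bound, but the pod never became Ready within the test's timeout.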

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 node stop m02 -v=7 --alsologtostderr
E0722 10:51:36.610644   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-461283 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.451747797s)

                                                
                                                
-- stdout --
	* Stopping node "ha-461283-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 10:51:14.517326   28215 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:51:14.517601   28215 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:51:14.517611   28215 out.go:304] Setting ErrFile to fd 2...
	I0722 10:51:14.517615   28215 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:51:14.517843   28215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:51:14.518115   28215 mustload.go:65] Loading cluster: ha-461283
	I0722 10:51:14.518516   28215 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:51:14.518532   28215 stop.go:39] StopHost: ha-461283-m02
	I0722 10:51:14.518913   28215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:51:14.518959   28215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:51:14.534292   28215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37957
	I0722 10:51:14.534705   28215 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:51:14.535307   28215 main.go:141] libmachine: Using API Version  1
	I0722 10:51:14.535341   28215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:51:14.535665   28215 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:51:14.537861   28215 out.go:177] * Stopping node "ha-461283-m02"  ...
	I0722 10:51:14.539117   28215 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0722 10:51:14.539137   28215 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:51:14.539444   28215 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0722 10:51:14.539472   28215 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:51:14.542288   28215 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:51:14.542733   28215 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:51:14.542758   28215 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:51:14.542915   28215 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:51:14.543097   28215 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:51:14.543234   28215 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:51:14.543399   28215 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	I0722 10:51:14.635079   28215 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0722 10:51:14.688450   28215 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0722 10:51:14.743320   28215 main.go:141] libmachine: Stopping "ha-461283-m02"...
	I0722 10:51:14.743358   28215 main.go:141] libmachine: (ha-461283-m02) Calling .GetState
	I0722 10:51:14.744731   28215 main.go:141] libmachine: (ha-461283-m02) Calling .Stop
	I0722 10:51:14.747805   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 0/120
	I0722 10:51:15.749099   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 1/120
	I0722 10:51:16.750318   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 2/120
	I0722 10:51:17.751577   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 3/120
	I0722 10:51:18.753097   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 4/120
	I0722 10:51:19.754726   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 5/120
	I0722 10:51:20.756055   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 6/120
	I0722 10:51:21.757319   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 7/120
	I0722 10:51:22.759654   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 8/120
	I0722 10:51:23.760993   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 9/120
	I0722 10:51:24.762943   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 10/120
	I0722 10:51:25.764240   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 11/120
	I0722 10:51:26.765947   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 12/120
	I0722 10:51:27.767309   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 13/120
	I0722 10:51:28.769006   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 14/120
	I0722 10:51:29.770819   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 15/120
	I0722 10:51:30.772447   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 16/120
	I0722 10:51:31.773615   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 17/120
	I0722 10:51:32.775022   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 18/120
	I0722 10:51:33.776294   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 19/120
	I0722 10:51:34.777846   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 20/120
	I0722 10:51:35.779292   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 21/120
	I0722 10:51:36.780644   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 22/120
	I0722 10:51:37.782701   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 23/120
	I0722 10:51:38.784028   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 24/120
	I0722 10:51:39.785999   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 25/120
	I0722 10:51:40.787151   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 26/120
	I0722 10:51:41.788653   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 27/120
	I0722 10:51:42.789941   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 28/120
	I0722 10:51:43.791140   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 29/120
	I0722 10:51:44.792809   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 30/120
	I0722 10:51:45.794805   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 31/120
	I0722 10:51:46.795955   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 32/120
	I0722 10:51:47.797321   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 33/120
	I0722 10:51:48.798602   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 34/120
	I0722 10:51:49.800126   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 35/120
	I0722 10:51:50.801149   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 36/120
	I0722 10:51:51.802431   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 37/120
	I0722 10:51:52.804067   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 38/120
	I0722 10:51:53.805276   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 39/120
	I0722 10:51:54.807114   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 40/120
	I0722 10:51:55.808368   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 41/120
	I0722 10:51:56.810181   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 42/120
	I0722 10:51:57.811613   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 43/120
	I0722 10:51:58.812992   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 44/120
	I0722 10:51:59.814787   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 45/120
	I0722 10:52:00.815987   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 46/120
	I0722 10:52:01.817338   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 47/120
	I0722 10:52:02.818769   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 48/120
	I0722 10:52:03.820103   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 49/120
	I0722 10:52:04.821699   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 50/120
	I0722 10:52:05.823022   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 51/120
	I0722 10:52:06.824191   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 52/120
	I0722 10:52:07.825431   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 53/120
	I0722 10:52:08.826705   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 54/120
	I0722 10:52:09.829000   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 55/120
	I0722 10:52:10.830573   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 56/120
	I0722 10:52:11.832910   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 57/120
	I0722 10:52:12.834913   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 58/120
	I0722 10:52:13.836257   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 59/120
	I0722 10:52:14.838034   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 60/120
	I0722 10:52:15.839178   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 61/120
	I0722 10:52:16.840748   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 62/120
	I0722 10:52:17.842806   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 63/120
	I0722 10:52:18.844093   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 64/120
	I0722 10:52:19.845800   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 65/120
	I0722 10:52:20.847079   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 66/120
	I0722 10:52:21.848256   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 67/120
	I0722 10:52:22.849673   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 68/120
	I0722 10:52:23.851346   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 69/120
	I0722 10:52:24.853240   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 70/120
	I0722 10:52:25.854684   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 71/120
	I0722 10:52:26.856009   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 72/120
	I0722 10:52:27.857365   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 73/120
	I0722 10:52:28.858709   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 74/120
	I0722 10:52:29.860136   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 75/120
	I0722 10:52:30.861529   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 76/120
	I0722 10:52:31.863770   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 77/120
	I0722 10:52:32.865132   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 78/120
	I0722 10:52:33.867449   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 79/120
	I0722 10:52:34.869389   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 80/120
	I0722 10:52:35.870613   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 81/120
	I0722 10:52:36.871818   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 82/120
	I0722 10:52:37.873574   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 83/120
	I0722 10:52:38.874808   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 84/120
	I0722 10:52:39.876661   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 85/120
	I0722 10:52:40.878908   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 86/120
	I0722 10:52:41.880272   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 87/120
	I0722 10:52:42.881629   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 88/120
	I0722 10:52:43.883533   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 89/120
	I0722 10:52:44.885733   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 90/120
	I0722 10:52:45.887163   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 91/120
	I0722 10:52:46.888436   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 92/120
	I0722 10:52:47.889715   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 93/120
	I0722 10:52:48.890835   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 94/120
	I0722 10:52:49.892808   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 95/120
	I0722 10:52:50.894605   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 96/120
	I0722 10:52:51.895817   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 97/120
	I0722 10:52:52.897041   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 98/120
	I0722 10:52:53.898289   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 99/120
	I0722 10:52:54.900226   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 100/120
	I0722 10:52:55.901652   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 101/120
	I0722 10:52:56.903753   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 102/120
	I0722 10:52:57.905066   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 103/120
	I0722 10:52:58.906872   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 104/120
	I0722 10:52:59.908425   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 105/120
	I0722 10:53:00.909737   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 106/120
	I0722 10:53:01.911106   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 107/120
	I0722 10:53:02.912620   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 108/120
	I0722 10:53:03.914682   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 109/120
	I0722 10:53:04.916619   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 110/120
	I0722 10:53:05.917911   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 111/120
	I0722 10:53:06.919019   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 112/120
	I0722 10:53:07.920268   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 113/120
	I0722 10:53:08.921551   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 114/120
	I0722 10:53:09.923379   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 115/120
	I0722 10:53:10.924792   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 116/120
	I0722 10:53:11.926060   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 117/120
	I0722 10:53:12.927437   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 118/120
	I0722 10:53:13.928654   28215 main.go:141] libmachine: (ha-461283-m02) Waiting for machine to stop 119/120
	I0722 10:53:14.929988   28215 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0722 10:53:14.930133   28215 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-461283 node stop m02 -v=7 --alsologtostderr": exit status 30
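The stderr block above shows the stop path polling the VM once per second for 120 attempts and then giving up while the guest still reports "Running". Below is a minimal Go sketch of that poll loop, assuming a hypothetical Machine interface (stopAndWait and Machine are illustrative stand-ins, not minikube's actual API):

	package stopsketch
	
	import (
		"fmt"
		"log"
		"time"
	)
	
	// Machine is an illustrative stand-in for the libmachine driver calls
	// (.Stop / .GetState) seen in the log above.
	type Machine interface {
		Stop() error
		GetState() (string, error)
	}
	
	// stopAndWait asks the driver to stop the VM, then polls its state once
	// per second for up to 120 attempts; if the guest still reports "Running"
	// after that, it returns the same "unable to stop vm" style of error.
	func stopAndWait(m Machine) error {
		if err := m.Stop(); err != nil {
			return err
		}
		for i := 0; i < 120; i++ {
			state, err := m.GetState()
			if err != nil {
				return err
			}
			if state != "Running" {
				return nil // machine reached a stopped state
			}
			log.Printf("Waiting for machine to stop %d/120", i)
			time.Sleep(time.Second)
		}
		return fmt.Errorf("unable to stop vm, current state %q", "Running")
	}

In this run the guest never left the Running state within the two-minute window, so the node stop command exits with status 30 and the test records the failure above.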
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr
E0722 10:53:29.088924   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr: exit status 3 (18.979347339s)

                                                
                                                
-- stdout --
	ha-461283
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-461283-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 10:53:14.973742   28657 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:53:14.974026   28657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:53:14.974038   28657 out.go:304] Setting ErrFile to fd 2...
	I0722 10:53:14.974045   28657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:53:14.974323   28657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:53:14.974521   28657 out.go:298] Setting JSON to false
	I0722 10:53:14.974560   28657 mustload.go:65] Loading cluster: ha-461283
	I0722 10:53:14.974667   28657 notify.go:220] Checking for updates...
	I0722 10:53:14.974981   28657 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:53:14.974999   28657 status.go:255] checking status of ha-461283 ...
	I0722 10:53:14.975360   28657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:14.975408   28657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:14.993309   28657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36451
	I0722 10:53:14.993704   28657 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:14.994257   28657 main.go:141] libmachine: Using API Version  1
	I0722 10:53:14.994279   28657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:14.994666   28657 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:14.994873   28657 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:53:14.996707   28657 status.go:330] ha-461283 host status = "Running" (err=<nil>)
	I0722 10:53:14.996723   28657 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:53:14.997092   28657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:14.997128   28657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:15.011596   28657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38751
	I0722 10:53:15.011941   28657 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:15.012418   28657 main.go:141] libmachine: Using API Version  1
	I0722 10:53:15.012459   28657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:15.012768   28657 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:15.012935   28657 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:53:15.015500   28657 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:15.015942   28657 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:53:15.015971   28657 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:15.016134   28657 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:53:15.016425   28657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:15.016459   28657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:15.031207   28657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34677
	I0722 10:53:15.031523   28657 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:15.031919   28657 main.go:141] libmachine: Using API Version  1
	I0722 10:53:15.031936   28657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:15.032253   28657 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:15.032434   28657 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:53:15.032607   28657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:15.032624   28657 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:53:15.035128   28657 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:15.035592   28657 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:53:15.035620   28657 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:15.035747   28657 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:53:15.035930   28657 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:53:15.036091   28657 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:53:15.036223   28657 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:53:15.121190   28657 ssh_runner.go:195] Run: systemctl --version
	I0722 10:53:15.128214   28657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:53:15.146292   28657 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:53:15.146314   28657 api_server.go:166] Checking apiserver status ...
	I0722 10:53:15.146340   28657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:53:15.162684   28657 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup
	W0722 10:53:15.173746   28657 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:53:15.173802   28657 ssh_runner.go:195] Run: ls
	I0722 10:53:15.178120   28657 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:53:15.184140   28657 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:53:15.184165   28657 status.go:422] ha-461283 apiserver status = Running (err=<nil>)
	I0722 10:53:15.184197   28657 status.go:257] ha-461283 status: &{Name:ha-461283 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:53:15.184222   28657 status.go:255] checking status of ha-461283-m02 ...
	I0722 10:53:15.184599   28657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:15.184632   28657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:15.199888   28657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I0722 10:53:15.200253   28657 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:15.200698   28657 main.go:141] libmachine: Using API Version  1
	I0722 10:53:15.200716   28657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:15.201079   28657 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:15.201238   28657 main.go:141] libmachine: (ha-461283-m02) Calling .GetState
	I0722 10:53:15.202742   28657 status.go:330] ha-461283-m02 host status = "Running" (err=<nil>)
	I0722 10:53:15.202757   28657 host.go:66] Checking if "ha-461283-m02" exists ...
	I0722 10:53:15.203033   28657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:15.203061   28657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:15.216951   28657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I0722 10:53:15.217339   28657 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:15.217739   28657 main.go:141] libmachine: Using API Version  1
	I0722 10:53:15.217769   28657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:15.218095   28657 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:15.218245   28657 main.go:141] libmachine: (ha-461283-m02) Calling .GetIP
	I0722 10:53:15.220826   28657 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:15.221187   28657 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:53:15.221210   28657 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:15.221339   28657 host.go:66] Checking if "ha-461283-m02" exists ...
	I0722 10:53:15.221726   28657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:15.221761   28657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:15.235677   28657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42767
	I0722 10:53:15.236101   28657 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:15.236520   28657 main.go:141] libmachine: Using API Version  1
	I0722 10:53:15.236539   28657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:15.236814   28657 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:15.236991   28657 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:53:15.237201   28657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:15.237223   28657 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:53:15.239496   28657 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:15.239821   28657 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:53:15.239856   28657 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:15.239995   28657 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:53:15.240154   28657 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:53:15.240263   28657 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:53:15.240358   28657 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	W0722 10:53:33.556551   28657 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.207:22: connect: no route to host
	W0722 10:53:33.556630   28657 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	E0722 10:53:33.556644   28657 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:53:33.556651   28657 status.go:257] ha-461283-m02 status: &{Name:ha-461283-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0722 10:53:33.556666   28657 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:53:33.556674   28657 status.go:255] checking status of ha-461283-m03 ...
	I0722 10:53:33.556966   28657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:33.557001   28657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:33.571353   28657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0722 10:53:33.571764   28657 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:33.572239   28657 main.go:141] libmachine: Using API Version  1
	I0722 10:53:33.572262   28657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:33.572617   28657 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:33.572795   28657 main.go:141] libmachine: (ha-461283-m03) Calling .GetState
	I0722 10:53:33.574242   28657 status.go:330] ha-461283-m03 host status = "Running" (err=<nil>)
	I0722 10:53:33.574260   28657 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:53:33.574564   28657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:33.574613   28657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:33.589329   28657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38739
	I0722 10:53:33.589690   28657 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:33.590107   28657 main.go:141] libmachine: Using API Version  1
	I0722 10:53:33.590124   28657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:33.590424   28657 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:33.590621   28657 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:53:33.593093   28657 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:53:33.593552   28657 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:53:33.593576   28657 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:53:33.593759   28657 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:53:33.594049   28657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:33.594089   28657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:33.608947   28657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I0722 10:53:33.609347   28657 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:33.609882   28657 main.go:141] libmachine: Using API Version  1
	I0722 10:53:33.609906   28657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:33.610227   28657 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:33.610386   28657 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:53:33.610563   28657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:33.610584   28657 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:53:33.612957   28657 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:53:33.613345   28657 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:53:33.613389   28657 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:53:33.613609   28657 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:53:33.613762   28657 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:53:33.613920   28657 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:53:33.614158   28657 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:53:33.701359   28657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:53:33.719391   28657 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:53:33.719417   28657 api_server.go:166] Checking apiserver status ...
	I0722 10:53:33.719457   28657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:53:33.735934   28657 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup
	W0722 10:53:33.745459   28657 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:53:33.745512   28657 ssh_runner.go:195] Run: ls
	I0722 10:53:33.749780   28657 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:53:33.753859   28657 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:53:33.753879   28657 status.go:422] ha-461283-m03 apiserver status = Running (err=<nil>)
	I0722 10:53:33.753898   28657 status.go:257] ha-461283-m03 status: &{Name:ha-461283-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:53:33.753917   28657 status.go:255] checking status of ha-461283-m04 ...
	I0722 10:53:33.754271   28657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:33.754308   28657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:33.768847   28657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37963
	I0722 10:53:33.769295   28657 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:33.769770   28657 main.go:141] libmachine: Using API Version  1
	I0722 10:53:33.769790   28657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:33.770095   28657 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:33.770273   28657 main.go:141] libmachine: (ha-461283-m04) Calling .GetState
	I0722 10:53:33.771783   28657 status.go:330] ha-461283-m04 host status = "Running" (err=<nil>)
	I0722 10:53:33.771800   28657 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:53:33.772182   28657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:33.772229   28657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:33.786829   28657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38951
	I0722 10:53:33.787211   28657 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:33.787598   28657 main.go:141] libmachine: Using API Version  1
	I0722 10:53:33.787611   28657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:33.787923   28657 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:33.788096   28657 main.go:141] libmachine: (ha-461283-m04) Calling .GetIP
	I0722 10:53:33.790932   28657 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:53:33.791337   28657 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:53:33.791365   28657 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:53:33.791491   28657 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:53:33.791771   28657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:33.791803   28657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:33.805521   28657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41713
	I0722 10:53:33.805903   28657 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:33.806358   28657 main.go:141] libmachine: Using API Version  1
	I0722 10:53:33.806385   28657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:33.806624   28657 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:33.806784   28657 main.go:141] libmachine: (ha-461283-m04) Calling .DriverName
	I0722 10:53:33.806924   28657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:33.806943   28657 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHHostname
	I0722 10:53:33.809542   28657 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:53:33.809894   28657 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:53:33.809921   28657 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:53:33.810032   28657 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHPort
	I0722 10:53:33.810162   28657 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHKeyPath
	I0722 10:53:33.810315   28657 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHUsername
	I0722 10:53:33.810425   28657 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m04/id_rsa Username:docker}
	I0722 10:53:33.893461   28657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:53:33.909566   28657 status.go:257] ha-461283-m04 status: &{Name:ha-461283-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr" : exit status 3
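In the status output above, the m02 row degrades to Host:Error with kubelet and apiserver marked Nonexistent because the SSH dial to 192.168.39.207:22 failed with "no route to host", while the other nodes answered their disk, kubelet and /healthz probes. A rough Go sketch of that per-node decision, assuming hypothetical helper names (probeNode and NodeStatus are stand-ins, not minikube's actual types):

	package statussketch
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// NodeStatus is an illustrative stand-in for the per-node status rows
	// printed in the stdout above.
	type NodeStatus struct {
		Host, Kubelet, APIServer string
	}
	
	// probeNode dials the node's SSH port first; if that fails (as it did for
	// ha-461283-m02), the node is reported as Host:Error with the kubelet and
	// apiserver marked Nonexistent rather than aborting the whole command.
	func probeNode(ip string) NodeStatus {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 5*time.Second)
		if err != nil {
			fmt.Printf("status error: %v\n", err)
			return NodeStatus{Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"}
		}
		conn.Close()
		// With SSH reachable, the real checks continue: df -h /var,
		// "systemctl is-active kubelet", and an HTTPS GET of /healthz
		// expecting a 200, as the log above shows for the other nodes.
		return NodeStatus{Host: "Running", Kubelet: "Running", APIServer: "Running"}
	}

Because one node probe errored, the status command as a whole returns exit status 3, which is what the test asserts on before collecting the post-mortem logs below.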
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-461283 -n ha-461283
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-461283 logs -n 25: (1.368984749s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-461283 cp ha-461283-m03:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3161647133/001/cp-test_ha-461283-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m03:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283:/home/docker/cp-test_ha-461283-m03_ha-461283.txt                       |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283 sudo cat                                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m03_ha-461283.txt                                 |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m03:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m02:/home/docker/cp-test_ha-461283-m03_ha-461283-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283-m02 sudo cat                                          | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m03_ha-461283-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m03:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04:/home/docker/cp-test_ha-461283-m03_ha-461283-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283-m04 sudo cat                                          | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m03_ha-461283-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-461283 cp testdata/cp-test.txt                                                | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3161647133/001/cp-test_ha-461283-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283:/home/docker/cp-test_ha-461283-m04_ha-461283.txt                       |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283 sudo cat                                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m04_ha-461283.txt                                 |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m02:/home/docker/cp-test_ha-461283-m04_ha-461283-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283-m02 sudo cat                                          | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m04_ha-461283-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m03:/home/docker/cp-test_ha-461283-m04_ha-461283-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283-m03 sudo cat                                          | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m04_ha-461283-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-461283 node stop m02 -v=7                                                     | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 10:46:38
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 10:46:38.194055   24174 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:46:38.194160   24174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:46:38.194171   24174 out.go:304] Setting ErrFile to fd 2...
	I0722 10:46:38.194176   24174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:46:38.194345   24174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:46:38.194890   24174 out.go:298] Setting JSON to false
	I0722 10:46:38.195769   24174 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1750,"bootTime":1721643448,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 10:46:38.195821   24174 start.go:139] virtualization: kvm guest
	I0722 10:46:38.197620   24174 out.go:177] * [ha-461283] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 10:46:38.198991   24174 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 10:46:38.198999   24174 notify.go:220] Checking for updates...
	I0722 10:46:38.200433   24174 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 10:46:38.201651   24174 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:46:38.202977   24174 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:46:38.204061   24174 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 10:46:38.205109   24174 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 10:46:38.206337   24174 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 10:46:38.239044   24174 out.go:177] * Using the kvm2 driver based on user configuration
	I0722 10:46:38.240138   24174 start.go:297] selected driver: kvm2
	I0722 10:46:38.240155   24174 start.go:901] validating driver "kvm2" against <nil>
	I0722 10:46:38.240180   24174 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 10:46:38.240938   24174 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:46:38.241043   24174 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 10:46:38.254722   24174 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 10:46:38.254755   24174 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 10:46:38.254971   24174 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 10:46:38.255017   24174 cni.go:84] Creating CNI manager for ""
	I0722 10:46:38.255028   24174 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0722 10:46:38.255034   24174 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0722 10:46:38.255094   24174 start.go:340] cluster config:
	{Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:46:38.255187   24174 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:46:38.256698   24174 out.go:177] * Starting "ha-461283" primary control-plane node in "ha-461283" cluster
	I0722 10:46:38.257819   24174 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:46:38.257842   24174 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 10:46:38.257848   24174 cache.go:56] Caching tarball of preloaded images
	I0722 10:46:38.257917   24174 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 10:46:38.257927   24174 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 10:46:38.258204   24174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:46:38.258224   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json: {Name:mk97f47cbaa54f35c862f0dd28f13f83cf708a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:46:38.258341   24174 start.go:360] acquireMachinesLock for ha-461283: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 10:46:38.258374   24174 start.go:364] duration metric: took 21.789µs to acquireMachinesLock for "ha-461283"
	I0722 10:46:38.258394   24174 start.go:93] Provisioning new machine with config: &{Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:46:38.258442   24174 start.go:125] createHost starting for "" (driver="kvm2")
	I0722 10:46:38.259793   24174 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 10:46:38.259890   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:46:38.259924   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:46:38.273144   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
	I0722 10:46:38.273503   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:46:38.273966   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:46:38.273993   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:46:38.274285   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:46:38.274483   24174 main.go:141] libmachine: (ha-461283) Calling .GetMachineName
	I0722 10:46:38.274631   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:46:38.274770   24174 start.go:159] libmachine.API.Create for "ha-461283" (driver="kvm2")
	I0722 10:46:38.274797   24174 client.go:168] LocalClient.Create starting
	I0722 10:46:38.274835   24174 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem
	I0722 10:46:38.274877   24174 main.go:141] libmachine: Decoding PEM data...
	I0722 10:46:38.274897   24174 main.go:141] libmachine: Parsing certificate...
	I0722 10:46:38.274974   24174 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem
	I0722 10:46:38.275005   24174 main.go:141] libmachine: Decoding PEM data...
	I0722 10:46:38.275024   24174 main.go:141] libmachine: Parsing certificate...
	I0722 10:46:38.275048   24174 main.go:141] libmachine: Running pre-create checks...
	I0722 10:46:38.275069   24174 main.go:141] libmachine: (ha-461283) Calling .PreCreateCheck
	I0722 10:46:38.275386   24174 main.go:141] libmachine: (ha-461283) Calling .GetConfigRaw
	I0722 10:46:38.275778   24174 main.go:141] libmachine: Creating machine...
	I0722 10:46:38.275794   24174 main.go:141] libmachine: (ha-461283) Calling .Create
	I0722 10:46:38.275914   24174 main.go:141] libmachine: (ha-461283) Creating KVM machine...
	I0722 10:46:38.277030   24174 main.go:141] libmachine: (ha-461283) DBG | found existing default KVM network
	I0722 10:46:38.277623   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:38.277507   24197 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0722 10:46:38.277646   24174 main.go:141] libmachine: (ha-461283) DBG | created network xml: 
	I0722 10:46:38.277655   24174 main.go:141] libmachine: (ha-461283) DBG | <network>
	I0722 10:46:38.277660   24174 main.go:141] libmachine: (ha-461283) DBG |   <name>mk-ha-461283</name>
	I0722 10:46:38.277666   24174 main.go:141] libmachine: (ha-461283) DBG |   <dns enable='no'/>
	I0722 10:46:38.277669   24174 main.go:141] libmachine: (ha-461283) DBG |   
	I0722 10:46:38.277675   24174 main.go:141] libmachine: (ha-461283) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0722 10:46:38.277681   24174 main.go:141] libmachine: (ha-461283) DBG |     <dhcp>
	I0722 10:46:38.277691   24174 main.go:141] libmachine: (ha-461283) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0722 10:46:38.277711   24174 main.go:141] libmachine: (ha-461283) DBG |     </dhcp>
	I0722 10:46:38.277720   24174 main.go:141] libmachine: (ha-461283) DBG |   </ip>
	I0722 10:46:38.277729   24174 main.go:141] libmachine: (ha-461283) DBG |   
	I0722 10:46:38.277741   24174 main.go:141] libmachine: (ha-461283) DBG | </network>
	I0722 10:46:38.277746   24174 main.go:141] libmachine: (ha-461283) DBG | 
	I0722 10:46:38.282356   24174 main.go:141] libmachine: (ha-461283) DBG | trying to create private KVM network mk-ha-461283 192.168.39.0/24...
	I0722 10:46:38.343389   24174 main.go:141] libmachine: (ha-461283) DBG | private KVM network mk-ha-461283 192.168.39.0/24 created
	I0722 10:46:38.343429   24174 main.go:141] libmachine: (ha-461283) Setting up store path in /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283 ...
	I0722 10:46:38.343444   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:38.343373   24197 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:46:38.343456   24174 main.go:141] libmachine: (ha-461283) Building disk image from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 10:46:38.343486   24174 main.go:141] libmachine: (ha-461283) Downloading /home/jenkins/minikube-integration/19313-5960/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 10:46:38.577561   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:38.577453   24197 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa...
	I0722 10:46:38.713410   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:38.713279   24197 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/ha-461283.rawdisk...
	I0722 10:46:38.713441   24174 main.go:141] libmachine: (ha-461283) DBG | Writing magic tar header
	I0722 10:46:38.713455   24174 main.go:141] libmachine: (ha-461283) DBG | Writing SSH key tar header
	I0722 10:46:38.713468   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:38.713386   24197 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283 ...
	I0722 10:46:38.713482   24174 main.go:141] libmachine: (ha-461283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283
	I0722 10:46:38.713578   24174 main.go:141] libmachine: (ha-461283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines
	I0722 10:46:38.713602   24174 main.go:141] libmachine: (ha-461283) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283 (perms=drwx------)
	I0722 10:46:38.713614   24174 main.go:141] libmachine: (ha-461283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:46:38.713627   24174 main.go:141] libmachine: (ha-461283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960
	I0722 10:46:38.713638   24174 main.go:141] libmachine: (ha-461283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0722 10:46:38.713654   24174 main.go:141] libmachine: (ha-461283) DBG | Checking permissions on dir: /home/jenkins
	I0722 10:46:38.713666   24174 main.go:141] libmachine: (ha-461283) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines (perms=drwxr-xr-x)
	I0722 10:46:38.713679   24174 main.go:141] libmachine: (ha-461283) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube (perms=drwxr-xr-x)
	I0722 10:46:38.713686   24174 main.go:141] libmachine: (ha-461283) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960 (perms=drwxrwxr-x)
	I0722 10:46:38.713701   24174 main.go:141] libmachine: (ha-461283) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0722 10:46:38.713714   24174 main.go:141] libmachine: (ha-461283) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0722 10:46:38.713728   24174 main.go:141] libmachine: (ha-461283) DBG | Checking permissions on dir: /home
	I0722 10:46:38.713740   24174 main.go:141] libmachine: (ha-461283) Creating domain...
	I0722 10:46:38.713759   24174 main.go:141] libmachine: (ha-461283) DBG | Skipping /home - not owner
	I0722 10:46:38.714729   24174 main.go:141] libmachine: (ha-461283) define libvirt domain using xml: 
	I0722 10:46:38.714787   24174 main.go:141] libmachine: (ha-461283) <domain type='kvm'>
	I0722 10:46:38.714800   24174 main.go:141] libmachine: (ha-461283)   <name>ha-461283</name>
	I0722 10:46:38.714808   24174 main.go:141] libmachine: (ha-461283)   <memory unit='MiB'>2200</memory>
	I0722 10:46:38.714818   24174 main.go:141] libmachine: (ha-461283)   <vcpu>2</vcpu>
	I0722 10:46:38.714841   24174 main.go:141] libmachine: (ha-461283)   <features>
	I0722 10:46:38.714855   24174 main.go:141] libmachine: (ha-461283)     <acpi/>
	I0722 10:46:38.714864   24174 main.go:141] libmachine: (ha-461283)     <apic/>
	I0722 10:46:38.714875   24174 main.go:141] libmachine: (ha-461283)     <pae/>
	I0722 10:46:38.714898   24174 main.go:141] libmachine: (ha-461283)     
	I0722 10:46:38.714912   24174 main.go:141] libmachine: (ha-461283)   </features>
	I0722 10:46:38.714921   24174 main.go:141] libmachine: (ha-461283)   <cpu mode='host-passthrough'>
	I0722 10:46:38.714931   24174 main.go:141] libmachine: (ha-461283)   
	I0722 10:46:38.714942   24174 main.go:141] libmachine: (ha-461283)   </cpu>
	I0722 10:46:38.714954   24174 main.go:141] libmachine: (ha-461283)   <os>
	I0722 10:46:38.714962   24174 main.go:141] libmachine: (ha-461283)     <type>hvm</type>
	I0722 10:46:38.714998   24174 main.go:141] libmachine: (ha-461283)     <boot dev='cdrom'/>
	I0722 10:46:38.715021   24174 main.go:141] libmachine: (ha-461283)     <boot dev='hd'/>
	I0722 10:46:38.715033   24174 main.go:141] libmachine: (ha-461283)     <bootmenu enable='no'/>
	I0722 10:46:38.715043   24174 main.go:141] libmachine: (ha-461283)   </os>
	I0722 10:46:38.715054   24174 main.go:141] libmachine: (ha-461283)   <devices>
	I0722 10:46:38.715065   24174 main.go:141] libmachine: (ha-461283)     <disk type='file' device='cdrom'>
	I0722 10:46:38.715080   24174 main.go:141] libmachine: (ha-461283)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/boot2docker.iso'/>
	I0722 10:46:38.715095   24174 main.go:141] libmachine: (ha-461283)       <target dev='hdc' bus='scsi'/>
	I0722 10:46:38.715106   24174 main.go:141] libmachine: (ha-461283)       <readonly/>
	I0722 10:46:38.715115   24174 main.go:141] libmachine: (ha-461283)     </disk>
	I0722 10:46:38.715126   24174 main.go:141] libmachine: (ha-461283)     <disk type='file' device='disk'>
	I0722 10:46:38.715138   24174 main.go:141] libmachine: (ha-461283)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0722 10:46:38.715153   24174 main.go:141] libmachine: (ha-461283)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/ha-461283.rawdisk'/>
	I0722 10:46:38.715164   24174 main.go:141] libmachine: (ha-461283)       <target dev='hda' bus='virtio'/>
	I0722 10:46:38.715176   24174 main.go:141] libmachine: (ha-461283)     </disk>
	I0722 10:46:38.715185   24174 main.go:141] libmachine: (ha-461283)     <interface type='network'>
	I0722 10:46:38.715197   24174 main.go:141] libmachine: (ha-461283)       <source network='mk-ha-461283'/>
	I0722 10:46:38.715207   24174 main.go:141] libmachine: (ha-461283)       <model type='virtio'/>
	I0722 10:46:38.715216   24174 main.go:141] libmachine: (ha-461283)     </interface>
	I0722 10:46:38.715226   24174 main.go:141] libmachine: (ha-461283)     <interface type='network'>
	I0722 10:46:38.715237   24174 main.go:141] libmachine: (ha-461283)       <source network='default'/>
	I0722 10:46:38.715247   24174 main.go:141] libmachine: (ha-461283)       <model type='virtio'/>
	I0722 10:46:38.715291   24174 main.go:141] libmachine: (ha-461283)     </interface>
	I0722 10:46:38.715309   24174 main.go:141] libmachine: (ha-461283)     <serial type='pty'>
	I0722 10:46:38.715319   24174 main.go:141] libmachine: (ha-461283)       <target port='0'/>
	I0722 10:46:38.715329   24174 main.go:141] libmachine: (ha-461283)     </serial>
	I0722 10:46:38.715341   24174 main.go:141] libmachine: (ha-461283)     <console type='pty'>
	I0722 10:46:38.715358   24174 main.go:141] libmachine: (ha-461283)       <target type='serial' port='0'/>
	I0722 10:46:38.715381   24174 main.go:141] libmachine: (ha-461283)     </console>
	I0722 10:46:38.715392   24174 main.go:141] libmachine: (ha-461283)     <rng model='virtio'>
	I0722 10:46:38.715404   24174 main.go:141] libmachine: (ha-461283)       <backend model='random'>/dev/random</backend>
	I0722 10:46:38.715413   24174 main.go:141] libmachine: (ha-461283)     </rng>
	I0722 10:46:38.715421   24174 main.go:141] libmachine: (ha-461283)     
	I0722 10:46:38.715430   24174 main.go:141] libmachine: (ha-461283)     
	I0722 10:46:38.715441   24174 main.go:141] libmachine: (ha-461283)   </devices>
	I0722 10:46:38.715461   24174 main.go:141] libmachine: (ha-461283) </domain>
	I0722 10:46:38.715475   24174 main.go:141] libmachine: (ha-461283) 
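
For illustration, the domain-definition step logged above (generate the XML, define the domain, then create it) roughly corresponds to the following minimal Go sketch using the libvirt.org/go/libvirt bindings. The connect URI and the domainXML placeholder are assumptions for the example; this is not the kvm2 driver's actual code.

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

// defineAndStart defines a persistent libvirt domain from XML and boots it,
// mirroring the "define libvirt domain using xml" / "Creating domain..." steps above.
func defineAndStart(uri, domainXML string) error {
	conn, err := libvirt.NewConnect(uri) // e.g. "qemu:///system", as in KVMQemuURI
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // register the <domain type='kvm'> document
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // start the defined VM
}

func main() {
	// Placeholder XML; in the log above this is the full <domain type='kvm'> document.
	if err := defineAndStart("qemu:///system", "<domain type='kvm'>...</domain>"); err != nil {
		log.Fatal(err)
	}
}
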
	I0722 10:46:38.719160   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:5d:41:e6 in network default
	I0722 10:46:38.719639   24174 main.go:141] libmachine: (ha-461283) Ensuring networks are active...
	I0722 10:46:38.719654   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:38.720334   24174 main.go:141] libmachine: (ha-461283) Ensuring network default is active
	I0722 10:46:38.720652   24174 main.go:141] libmachine: (ha-461283) Ensuring network mk-ha-461283 is active
	I0722 10:46:38.721108   24174 main.go:141] libmachine: (ha-461283) Getting domain xml...
	I0722 10:46:38.721719   24174 main.go:141] libmachine: (ha-461283) Creating domain...
	I0722 10:46:39.878056   24174 main.go:141] libmachine: (ha-461283) Waiting to get IP...
	I0722 10:46:39.878814   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:39.879213   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:39.879239   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:39.879169   24197 retry.go:31] will retry after 211.051521ms: waiting for machine to come up
	I0722 10:46:40.091502   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:40.091910   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:40.091938   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:40.091865   24197 retry.go:31] will retry after 243.80033ms: waiting for machine to come up
	I0722 10:46:40.337448   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:40.337829   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:40.337860   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:40.337793   24197 retry.go:31] will retry after 313.296222ms: waiting for machine to come up
	I0722 10:46:40.652162   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:40.652703   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:40.652730   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:40.652659   24197 retry.go:31] will retry after 491.357157ms: waiting for machine to come up
	I0722 10:46:41.145220   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:41.145735   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:41.145755   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:41.145693   24197 retry.go:31] will retry after 713.551121ms: waiting for machine to come up
	I0722 10:46:41.860641   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:41.861057   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:41.861085   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:41.861020   24197 retry.go:31] will retry after 599.546633ms: waiting for machine to come up
	I0722 10:46:42.461744   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:42.462129   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:42.462173   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:42.462100   24197 retry.go:31] will retry after 984.367854ms: waiting for machine to come up
	I0722 10:46:43.448943   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:43.449367   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:43.449395   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:43.449311   24197 retry.go:31] will retry after 1.326982923s: waiting for machine to come up
	I0722 10:46:44.777306   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:44.777665   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:44.777688   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:44.777626   24197 retry.go:31] will retry after 1.827526011s: waiting for machine to come up
	I0722 10:46:46.607846   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:46.608257   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:46.608296   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:46.608222   24197 retry.go:31] will retry after 2.205030482s: waiting for machine to come up
	I0722 10:46:48.814467   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:48.814895   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:48.814922   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:48.814858   24197 retry.go:31] will retry after 2.262882594s: waiting for machine to come up
	I0722 10:46:51.080211   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:51.080642   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:51.080664   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:51.080600   24197 retry.go:31] will retry after 3.047165474s: waiting for machine to come up
	I0722 10:46:54.129188   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:54.129583   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:54.129609   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:54.129546   24197 retry.go:31] will retry after 4.354207961s: waiting for machine to come up
	I0722 10:46:58.484970   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.485388   24174 main.go:141] libmachine: (ha-461283) Found IP for machine: 192.168.39.43
	I0722 10:46:58.485422   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has current primary IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
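
The "will retry after ..." lines above show the driver polling libvirt's DHCP leases with a growing delay until the new domain reports an address. Below is a minimal, generic sketch of that wait-with-backoff pattern; the helper name and the growth factor are illustrative assumptions, not minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff calls lookup until it succeeds or the deadline passes,
// growing the wait between attempts, similar in spirit to the retries logged above.
func retryWithBackoff(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		wait = wait * 3 / 2 // grow the delay between attempts
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	// Hypothetical lookup; the real code inspects the DHCP leases of network mk-ha-461283.
	_, err := retryWithBackoff(func() (string, error) {
		return "", errors.New("unable to find current IP address of domain")
	}, 2*time.Second)
	fmt.Println(err)
}
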
	I0722 10:46:58.485431   24174 main.go:141] libmachine: (ha-461283) Reserving static IP address...
	I0722 10:46:58.485749   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find host DHCP lease matching {name: "ha-461283", mac: "52:54:00:1d:42:30", ip: "192.168.39.43"} in network mk-ha-461283
	I0722 10:46:58.551564   24174 main.go:141] libmachine: (ha-461283) DBG | Getting to WaitForSSH function...
	I0722 10:46:58.551595   24174 main.go:141] libmachine: (ha-461283) Reserved static IP address: 192.168.39.43
	I0722 10:46:58.551609   24174 main.go:141] libmachine: (ha-461283) Waiting for SSH to be available...
	I0722 10:46:58.553973   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.554325   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:58.554361   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.554435   24174 main.go:141] libmachine: (ha-461283) DBG | Using SSH client type: external
	I0722 10:46:58.554469   24174 main.go:141] libmachine: (ha-461283) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa (-rw-------)
	I0722 10:46:58.554495   24174 main.go:141] libmachine: (ha-461283) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 10:46:58.554507   24174 main.go:141] libmachine: (ha-461283) DBG | About to run SSH command:
	I0722 10:46:58.554519   24174 main.go:141] libmachine: (ha-461283) DBG | exit 0
	I0722 10:46:58.676276   24174 main.go:141] libmachine: (ha-461283) DBG | SSH cmd err, output: <nil>: 
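
The WaitForSSH probe above simply runs "exit 0" over SSH until it succeeds. Here is a minimal sketch of such a readiness check with golang.org/x/crypto/ssh, reusing the address, user, and key path from the log; the helper itself is illustrative rather than libmachine's implementation.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshReady dials the machine and runs "exit 0", mirroring the WaitForSSH probe above.
func sshReady(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	err := sshReady("192.168.39.43:22", "docker",
		"/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa")
	fmt.Println("ssh ready:", err == nil, err)
}
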
	I0722 10:46:58.676593   24174 main.go:141] libmachine: (ha-461283) KVM machine creation complete!
	I0722 10:46:58.677059   24174 main.go:141] libmachine: (ha-461283) Calling .GetConfigRaw
	I0722 10:46:58.677560   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:46:58.677746   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:46:58.677893   24174 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0722 10:46:58.677908   24174 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:46:58.679105   24174 main.go:141] libmachine: Detecting operating system of created instance...
	I0722 10:46:58.679116   24174 main.go:141] libmachine: Waiting for SSH to be available...
	I0722 10:46:58.679123   24174 main.go:141] libmachine: Getting to WaitForSSH function...
	I0722 10:46:58.679138   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:58.681266   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.681691   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:58.681728   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.681856   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:58.682022   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:58.682179   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:58.682310   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:58.682472   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:46:58.682715   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:46:58.682730   24174 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0722 10:46:58.783807   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 10:46:58.783826   24174 main.go:141] libmachine: Detecting the provisioner...
	I0722 10:46:58.783832   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:58.786347   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.786666   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:58.786693   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.786919   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:58.787100   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:58.787269   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:58.787384   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:58.787501   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:46:58.787685   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:46:58.787698   24174 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0722 10:46:58.888947   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0722 10:46:58.889019   24174 main.go:141] libmachine: found compatible host: buildroot
	I0722 10:46:58.889029   24174 main.go:141] libmachine: Provisioning with buildroot...
	I0722 10:46:58.889038   24174 main.go:141] libmachine: (ha-461283) Calling .GetMachineName
	I0722 10:46:58.889291   24174 buildroot.go:166] provisioning hostname "ha-461283"
	I0722 10:46:58.889315   24174 main.go:141] libmachine: (ha-461283) Calling .GetMachineName
	I0722 10:46:58.889495   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:58.891793   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.892098   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:58.892121   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.892266   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:58.892431   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:58.892563   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:58.892682   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:58.892835   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:46:58.893049   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:46:58.893067   24174 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-461283 && echo "ha-461283" | sudo tee /etc/hostname
	I0722 10:46:59.005733   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-461283
	
	I0722 10:46:59.005754   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:59.008176   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.008431   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.008453   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.008599   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:59.008776   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.008937   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.009050   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:59.009170   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:46:59.009376   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:46:59.009392   24174 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-461283' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-461283/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-461283' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 10:46:59.117019   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
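
The hostname step above sets the hostname over SSH and then patches /etc/hosts only when the 127.0.1.1 entry is missing or stale. A small sketch of how such a command string might be assembled before being sent to the machine; hostnameCmd is a hypothetical helper, not the buildroot provisioner itself.

package main

import "fmt"

// hostnameCmd reproduces the shape of the shell run in the log: set the hostname,
// then make sure /etc/hosts maps 127.0.1.1 to it, rewriting or appending the line as needed.
func hostnameCmd(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameCmd("ha-461283"))
}
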
	I0722 10:46:59.117044   24174 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 10:46:59.117075   24174 buildroot.go:174] setting up certificates
	I0722 10:46:59.117084   24174 provision.go:84] configureAuth start
	I0722 10:46:59.117095   24174 main.go:141] libmachine: (ha-461283) Calling .GetMachineName
	I0722 10:46:59.117349   24174 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:46:59.120000   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.120358   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.120399   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.120555   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:59.122736   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.123042   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.123066   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.123133   24174 provision.go:143] copyHostCerts
	I0722 10:46:59.123171   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 10:46:59.123208   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 10:46:59.123238   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 10:46:59.123316   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 10:46:59.123404   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 10:46:59.123435   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 10:46:59.123444   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 10:46:59.123480   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 10:46:59.123547   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 10:46:59.123570   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 10:46:59.123578   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 10:46:59.123608   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 10:46:59.123667   24174 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.ha-461283 san=[127.0.0.1 192.168.39.43 ha-461283 localhost minikube]
	I0722 10:46:59.316403   24174 provision.go:177] copyRemoteCerts
	I0722 10:46:59.316458   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 10:46:59.316480   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:59.319080   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.319360   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.319380   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.319564   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:59.319736   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.319891   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:59.319990   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:46:59.399168   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 10:46:59.399235   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 10:46:59.423274   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 10:46:59.423338   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0722 10:46:59.445969   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 10:46:59.446021   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 10:46:59.468209   24174 provision.go:87] duration metric: took 351.114311ms to configureAuth
	I0722 10:46:59.468231   24174 buildroot.go:189] setting minikube options for container-runtime
	I0722 10:46:59.468397   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:46:59.468470   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:59.470912   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.471209   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.471227   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.471423   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:59.471612   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.471770   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.471924   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:59.472084   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:46:59.472240   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:46:59.472257   24174 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 10:46:59.731437   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 10:46:59.731467   24174 main.go:141] libmachine: Checking connection to Docker...
	I0722 10:46:59.731478   24174 main.go:141] libmachine: (ha-461283) Calling .GetURL
	I0722 10:46:59.732656   24174 main.go:141] libmachine: (ha-461283) DBG | Using libvirt version 6000000
	I0722 10:46:59.734495   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.734771   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.734796   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.734958   24174 main.go:141] libmachine: Docker is up and running!
	I0722 10:46:59.734980   24174 main.go:141] libmachine: Reticulating splines...
	I0722 10:46:59.734992   24174 client.go:171] duration metric: took 21.460185416s to LocalClient.Create
	I0722 10:46:59.735015   24174 start.go:167] duration metric: took 21.460246012s to libmachine.API.Create "ha-461283"
	I0722 10:46:59.735025   24174 start.go:293] postStartSetup for "ha-461283" (driver="kvm2")
	I0722 10:46:59.735035   24174 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 10:46:59.735051   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:46:59.735297   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 10:46:59.735321   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:59.737204   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.737493   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.737517   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.737642   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:59.737815   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.737981   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:59.738127   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:46:59.819805   24174 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 10:46:59.824341   24174 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 10:46:59.824367   24174 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 10:46:59.824465   24174 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 10:46:59.824587   24174 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 10:46:59.824600   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /etc/ssl/certs/130982.pem
	I0722 10:46:59.824708   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 10:46:59.834595   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 10:46:59.857726   24174 start.go:296] duration metric: took 122.68973ms for postStartSetup
	I0722 10:46:59.857770   24174 main.go:141] libmachine: (ha-461283) Calling .GetConfigRaw
	I0722 10:46:59.858306   24174 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:46:59.860770   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.861135   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.861160   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.861375   24174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:46:59.861563   24174 start.go:128] duration metric: took 21.603112314s to createHost
	I0722 10:46:59.861601   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:59.863578   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.863856   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.863885   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.863991   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:59.864163   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.864295   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.864429   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:59.864560   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:46:59.864702   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:46:59.864712   24174 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 10:46:59.964927   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645219.938237293
	
	I0722 10:46:59.964946   24174 fix.go:216] guest clock: 1721645219.938237293
	I0722 10:46:59.964953   24174 fix.go:229] Guest: 2024-07-22 10:46:59.938237293 +0000 UTC Remote: 2024-07-22 10:46:59.86157437 +0000 UTC m=+21.708119370 (delta=76.662923ms)
	I0722 10:46:59.964971   24174 fix.go:200] guest clock delta is within tolerance: 76.662923ms
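
The fix.go lines above read the guest clock with date and accept it when the delta from the host clock stays within a tolerance. A small sketch of that comparison follows; the 2-second tolerance is an assumption made for the example.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest's reported time is close enough to the host's,
// mirroring the "guest clock delta is within tolerance" check above.
func clockDeltaOK(guestUnixNanos int64, tolerance time.Duration) (time.Duration, bool) {
	guest := time.Unix(0, guestUnixNanos)
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// 1721645219.938237293 is the guest timestamp printed in the log above.
	delta, ok := clockDeltaOK(1721645219_938237293, 2*time.Second)
	fmt.Println(delta, ok)
}
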
	I0722 10:46:59.964976   24174 start.go:83] releasing machines lock for "ha-461283", held for 21.706593928s
	I0722 10:46:59.964990   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:46:59.965205   24174 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:46:59.967418   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.967665   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.967693   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.967837   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:46:59.968278   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:46:59.968460   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:46:59.968576   24174 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 10:46:59.968623   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:59.968625   24174 ssh_runner.go:195] Run: cat /version.json
	I0722 10:46:59.968644   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:59.970974   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.971073   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.971317   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.971343   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.971367   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.971383   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.971456   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:59.971615   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:59.971621   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.971780   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:59.971793   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.971933   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:46:59.971946   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:59.972070   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:47:00.075078   24174 ssh_runner.go:195] Run: systemctl --version
	I0722 10:47:00.081128   24174 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 10:47:00.245140   24174 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 10:47:00.251293   24174 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 10:47:00.251349   24174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 10:47:00.269861   24174 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 10:47:00.269885   24174 start.go:495] detecting cgroup driver to use...
	I0722 10:47:00.269940   24174 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 10:47:00.286084   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 10:47:00.300419   24174 docker.go:217] disabling cri-docker service (if available) ...
	I0722 10:47:00.300490   24174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 10:47:00.314748   24174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 10:47:00.328509   24174 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 10:47:00.439875   24174 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 10:47:00.607897   24174 docker.go:233] disabling docker service ...
	I0722 10:47:00.607966   24174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 10:47:00.622144   24174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 10:47:00.634895   24174 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 10:47:00.742615   24174 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 10:47:00.850525   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 10:47:00.864521   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 10:47:00.882277   24174 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 10:47:00.882346   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:00.892619   24174 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 10:47:00.892678   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:00.903021   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:00.913199   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:00.923386   24174 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 10:47:00.933947   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:00.944265   24174 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:00.960405   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:00.970616   24174 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 10:47:00.979918   24174 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 10:47:00.979972   24174 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 10:47:00.991686   24174 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 10:47:01.001055   24174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:47:01.106372   24174 ssh_runner.go:195] Run: sudo systemctl restart crio
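
The crio.go steps above patch /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager) and then restart crio. A sketch of how those command strings could be built; crioConfigCmds is a hypothetical helper, and the real flow pipes the commands through the ssh runner shown in the log.

package main

import "fmt"

// crioConfigCmds builds the sed invocations seen in the log: pin the pause image
// and force the chosen cgroup manager in crio's drop-in config, then restart crio.
func crioConfigCmds(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, cmd := range crioConfigCmds("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(cmd)
	}
}
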
	I0722 10:47:01.238492   24174 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 10:47:01.238570   24174 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 10:47:01.243407   24174 start.go:563] Will wait 60s for crictl version
	I0722 10:47:01.243452   24174 ssh_runner.go:195] Run: which crictl
	I0722 10:47:01.247174   24174 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 10:47:01.286447   24174 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 10:47:01.286530   24174 ssh_runner.go:195] Run: crio --version
	I0722 10:47:01.314485   24174 ssh_runner.go:195] Run: crio --version
	I0722 10:47:01.343254   24174 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 10:47:01.344418   24174 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:47:01.346906   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:47:01.347301   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:47:01.347333   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:47:01.347522   24174 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 10:47:01.351572   24174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 10:47:01.364615   24174 kubeadm.go:883] updating cluster {Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 10:47:01.364707   24174 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:47:01.364746   24174 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 10:47:01.396482   24174 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 10:47:01.396559   24174 ssh_runner.go:195] Run: which lz4
	I0722 10:47:01.400470   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0722 10:47:01.400580   24174 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 10:47:01.404612   24174 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 10:47:01.404633   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 10:47:02.790648   24174 crio.go:462] duration metric: took 1.390105316s to copy over tarball
	I0722 10:47:02.790722   24174 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 10:47:04.927301   24174 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.136542439s)
	I0722 10:47:04.927336   24174 crio.go:469] duration metric: took 2.136663526s to extract the tarball
	I0722 10:47:04.927345   24174 ssh_runner.go:146] rm: /preloaded.tar.lz4
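The stat / scp / tar -I lz4 sequence above is the preload fast path: if the tarball is not already on the node, copy it over, unpack it into /var, then delete it. A rough local equivalent, illustrative only and reusing the paths from the log:

    // preload_extract.go - check, copy, extract, clean up (requires root and the cached tarball).
    package main

    import (
    	"io"
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	const target = "/preloaded.tar.lz4"
    	const cached = "/home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4"

    	if _, err := os.Stat(target); err != nil {
    		src, err := os.Open(cached)
    		if err != nil {
    			log.Fatal(err)
    		}
    		defer src.Close()
    		dst, err := os.Create(target)
    		if err != nil {
    			log.Fatal(err)
    		}
    		if _, err := io.Copy(dst, src); err != nil {
    			log.Fatal(err)
    		}
    		dst.Close()
    	}

    	// tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", target)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatal(err)
    	}
    	_ = os.Remove(target)
    }
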
	I0722 10:47:04.965923   24174 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 10:47:05.015846   24174 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 10:47:05.015868   24174 cache_images.go:84] Images are preloaded, skipping loading
	I0722 10:47:05.015877   24174 kubeadm.go:934] updating node { 192.168.39.43 8443 v1.30.3 crio true true} ...
	I0722 10:47:05.016104   24174 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-461283 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 10:47:05.016199   24174 ssh_runner.go:195] Run: crio config
	I0722 10:47:05.060548   24174 cni.go:84] Creating CNI manager for ""
	I0722 10:47:05.060566   24174 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0722 10:47:05.060576   24174 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 10:47:05.060601   24174 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-461283 NodeName:ha-461283 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 10:47:05.060750   24174 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-461283"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.43
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
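The generated kubeadm config is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check before handing such a file to kubeadm is to walk the documents and confirm each kind; the sketch below assumes gopkg.in/yaml.v3 and a local copy named kubeadm.yaml. Running kubeadm init --config kubeadm.yaml --dry-run is another non-destructive check.

    // kubeadm_config_peek.go - print apiVersion/kind for each document in the stream.
    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break
    			}
    			log.Fatal(err)
    		}
    		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
    	}
    }
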
	
	I0722 10:47:05.060774   24174 kube-vip.go:115] generating kube-vip config ...
	I0722 10:47:05.060823   24174 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 10:47:05.079086   24174 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 10:47:05.079207   24174 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
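Once this static pod is running, the VIP 192.168.39.254 from the manifest should answer on port 8443 even before any additional control-plane node joins. A small probe might look like the sketch below; certificate verification is skipped only because the endpoint presents minikube's own CA, so this is for illustration, not production use:

    // vip_probe.go - check that the kube-vip address accepts HTTPS connections.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
    		},
    	}
    	resp, err := client.Get("https://192.168.39.254:8443/healthz")
    	if err != nil {
    		fmt.Println("VIP not reachable yet:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("VIP answered with status:", resp.Status)
    }
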
	I0722 10:47:05.079260   24174 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 10:47:05.089756   24174 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 10:47:05.089823   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0722 10:47:05.099468   24174 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0722 10:47:05.115987   24174 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 10:47:05.131994   24174 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0722 10:47:05.148077   24174 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0722 10:47:05.164679   24174 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0722 10:47:05.168827   24174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
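The bash one-liner above keeps /etc/hosts idempotent: strip any stale control-plane.minikube.internal line, then append the current VIP mapping. A rough Go equivalent, illustrative and assuming permission to rewrite /etc/hosts:

    // hosts_update.go - drop the old control-plane entry and append the fresh one.
    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue // remove the stale mapping, mirroring the grep -v above
    		}
    		kept = append(kept, line)
    	}
    	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
    	if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
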
	I0722 10:47:05.180730   24174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:47:05.320481   24174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 10:47:05.337743   24174 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283 for IP: 192.168.39.43
	I0722 10:47:05.337764   24174 certs.go:194] generating shared ca certs ...
	I0722 10:47:05.337783   24174 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:05.337933   24174 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 10:47:05.337982   24174 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 10:47:05.337995   24174 certs.go:256] generating profile certs ...
	I0722 10:47:05.338053   24174 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key
	I0722 10:47:05.338069   24174 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.crt with IP's: []
	I0722 10:47:05.383714   24174 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.crt ...
	I0722 10:47:05.383743   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.crt: {Name:mkb171df70710be618a58bf690afb21e809e5818 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:05.383934   24174 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key ...
	I0722 10:47:05.383948   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key: {Name:mkff020491afb1adea70aef1c3934b3ad6f7ba79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:05.384050   24174 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.9d9c95a9
	I0722 10:47:05.384075   24174 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.9d9c95a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.43 192.168.39.254]
	I0722 10:47:05.468803   24174 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.9d9c95a9 ...
	I0722 10:47:05.468832   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.9d9c95a9: {Name:mkb1e692f29ef9c1a8256a9539ef7be1ada40148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:05.469010   24174 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.9d9c95a9 ...
	I0722 10:47:05.469026   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.9d9c95a9: {Name:mk338616cb090895bedf9e1ac4cddee28ec5e7c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:05.469130   24174 certs.go:381] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.9d9c95a9 -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt
	I0722 10:47:05.469220   24174 certs.go:385] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.9d9c95a9 -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key
	I0722 10:47:05.469298   24174 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key
	I0722 10:47:05.469320   24174 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt with IP's: []
	I0722 10:47:05.673958   24174 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt ...
	I0722 10:47:05.673990   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt: {Name:mk6f787c87e693afa89eca8ff9fe8efd0b927b5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:05.674166   24174 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key ...
	I0722 10:47:05.674179   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key: {Name:mka2dfacbc83fe7edf41518e908d2a8e0a927e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:05.674273   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 10:47:05.674294   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 10:47:05.674309   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 10:47:05.674328   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 10:47:05.674347   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 10:47:05.674367   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 10:47:05.674385   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 10:47:05.674401   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
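The profile certificates above are issued with IP SANs covering the service VIP 10.96.0.1, loopback, 10.0.0.1, the node IP 192.168.39.43 and the kube-vip address 192.168.39.254. As a self-contained illustration (not minikube's crypto.go, and self-signed rather than signed by minikubeCA for brevity), issuing a certificate with the same IP SANs in Go looks like:

    // san_cert_sketch.go - generate a serving certificate with the SAN set from the log.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.43"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	// Self-signed here; the real apiserver cert is signed with the minikubeCA key.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
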
	I0722 10:47:05.674463   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 10:47:05.674509   24174 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 10:47:05.674522   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 10:47:05.674556   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 10:47:05.674587   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 10:47:05.674617   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 10:47:05.674666   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 10:47:05.674702   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem -> /usr/share/ca-certificates/13098.pem
	I0722 10:47:05.674721   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /usr/share/ca-certificates/130982.pem
	I0722 10:47:05.674740   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:47:05.675282   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 10:47:05.700991   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 10:47:05.724214   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 10:47:05.746822   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 10:47:05.770107   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 10:47:05.793099   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 10:47:05.815971   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 10:47:05.838649   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 10:47:05.861123   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 10:47:05.884059   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 10:47:05.906560   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 10:47:05.928529   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 10:47:05.943858   24174 ssh_runner.go:195] Run: openssl version
	I0722 10:47:05.949452   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 10:47:05.959767   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 10:47:05.964070   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 10:47:05.964114   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 10:47:05.969978   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 10:47:05.980549   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 10:47:05.990675   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 10:47:05.994821   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 10:47:05.994867   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 10:47:06.000315   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 10:47:06.010704   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 10:47:06.021094   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:47:06.025386   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:47:06.025424   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:47:06.030837   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
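Each "ln -fs" above installs a certificate under the hash-named link OpenSSL uses for trust-store lookups (for example b5213941.0 for minikubeCA.pem). A sketch of the same step, shelling out to openssl for the subject hash and creating the symlink, assuming sufficient privileges on the guest:

    // hash_symlink.go - link /etc/ssl/certs/<subject-hash>.0 to a CA certificate.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // -f behaviour: replace an existing link
    	if err := os.Symlink(pemPath, link); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("linked", link, "->", pemPath)
    }
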
	I0722 10:47:06.041101   24174 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 10:47:06.045007   24174 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 10:47:06.045059   24174 kubeadm.go:392] StartCluster: {Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:47:06.045125   24174 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 10:47:06.045188   24174 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 10:47:06.084180   24174 cri.go:89] found id: ""
	I0722 10:47:06.084238   24174 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 10:47:06.094148   24174 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 10:47:06.103367   24174 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 10:47:06.115692   24174 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 10:47:06.115713   24174 kubeadm.go:157] found existing configuration files:
	
	I0722 10:47:06.115758   24174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 10:47:06.124940   24174 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 10:47:06.124985   24174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 10:47:06.148183   24174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 10:47:06.159533   24174 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 10:47:06.159604   24174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 10:47:06.173428   24174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 10:47:06.187329   24174 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 10:47:06.187387   24174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 10:47:06.198205   24174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 10:47:06.207240   24174 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 10:47:06.207293   24174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 10:47:06.216307   24174 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 10:47:06.329090   24174 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 10:47:06.329235   24174 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 10:47:06.471260   24174 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 10:47:06.471393   24174 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 10:47:06.471511   24174 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 10:47:06.681235   24174 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 10:47:06.788849   24174 out.go:204]   - Generating certificates and keys ...
	I0722 10:47:06.788955   24174 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 10:47:06.789033   24174 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 10:47:06.919200   24174 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0722 10:47:06.980563   24174 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0722 10:47:07.147794   24174 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0722 10:47:07.230076   24174 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0722 10:47:07.496079   24174 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0722 10:47:07.496246   24174 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-461283 localhost] and IPs [192.168.39.43 127.0.0.1 ::1]
	I0722 10:47:07.808389   24174 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0722 10:47:07.808536   24174 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-461283 localhost] and IPs [192.168.39.43 127.0.0.1 ::1]
	I0722 10:47:07.890205   24174 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0722 10:47:08.131805   24174 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0722 10:47:08.307800   24174 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0722 10:47:08.307885   24174 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 10:47:08.467741   24174 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 10:47:08.601683   24174 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 10:47:08.817858   24174 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 10:47:09.028565   24174 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 10:47:09.111319   24174 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 10:47:09.111923   24174 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 10:47:09.114692   24174 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 10:47:09.116366   24174 out.go:204]   - Booting up control plane ...
	I0722 10:47:09.116481   24174 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 10:47:09.116556   24174 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 10:47:09.118341   24174 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 10:47:09.133851   24174 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 10:47:09.134721   24174 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 10:47:09.134783   24174 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 10:47:09.290939   24174 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 10:47:09.291041   24174 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 10:47:09.790524   24174 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.330082ms
	I0722 10:47:09.790609   24174 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 10:47:15.775977   24174 kubeadm.go:310] [api-check] The API server is healthy after 5.989151305s
	I0722 10:47:15.787856   24174 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 10:47:15.805406   24174 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 10:47:15.835921   24174 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 10:47:15.836164   24174 kubeadm.go:310] [mark-control-plane] Marking the node ha-461283 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 10:47:15.847227   24174 kubeadm.go:310] [bootstrap-token] Using token: vshj1k.w5z6g3thto8ie6ws
	I0722 10:47:15.848559   24174 out.go:204]   - Configuring RBAC rules ...
	I0722 10:47:15.848677   24174 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 10:47:15.854509   24174 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 10:47:15.862066   24174 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 10:47:15.869511   24174 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 10:47:15.874443   24174 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 10:47:15.878295   24174 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 10:47:16.183381   24174 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 10:47:16.636868   24174 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 10:47:17.182238   24174 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 10:47:17.183358   24174 kubeadm.go:310] 
	I0722 10:47:17.183451   24174 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 10:47:17.183462   24174 kubeadm.go:310] 
	I0722 10:47:17.183581   24174 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 10:47:17.183602   24174 kubeadm.go:310] 
	I0722 10:47:17.183658   24174 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 10:47:17.183743   24174 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 10:47:17.183807   24174 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 10:47:17.183817   24174 kubeadm.go:310] 
	I0722 10:47:17.183874   24174 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 10:47:17.183882   24174 kubeadm.go:310] 
	I0722 10:47:17.183931   24174 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 10:47:17.183943   24174 kubeadm.go:310] 
	I0722 10:47:17.184017   24174 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 10:47:17.184117   24174 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 10:47:17.184213   24174 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 10:47:17.184222   24174 kubeadm.go:310] 
	I0722 10:47:17.184329   24174 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 10:47:17.184451   24174 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 10:47:17.184461   24174 kubeadm.go:310] 
	I0722 10:47:17.184574   24174 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vshj1k.w5z6g3thto8ie6ws \
	I0722 10:47:17.184671   24174 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 10:47:17.184706   24174 kubeadm.go:310] 	--control-plane 
	I0722 10:47:17.184715   24174 kubeadm.go:310] 
	I0722 10:47:17.184811   24174 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 10:47:17.184821   24174 kubeadm.go:310] 
	I0722 10:47:17.184931   24174 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vshj1k.w5z6g3thto8ie6ws \
	I0722 10:47:17.185062   24174 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 10:47:17.185425   24174 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
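The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 pin of the cluster CA's public key (its SubjectPublicKeyInfo). A self-contained sketch for recomputing it from /var/lib/minikube/certs/ca.crt, so a join command can be verified or rebuilt later:

    // ca_cert_hash.go - reproduce the sha256:<hex> value kubeadm expects for joins.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
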
	I0722 10:47:17.185529   24174 cni.go:84] Creating CNI manager for ""
	I0722 10:47:17.185541   24174 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0722 10:47:17.187201   24174 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0722 10:47:17.188698   24174 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0722 10:47:17.193996   24174 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0722 10:47:17.194014   24174 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0722 10:47:17.213441   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0722 10:47:17.541309   24174 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 10:47:17.541424   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:17.541438   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-461283 minikube.k8s.io/updated_at=2024_07_22T10_47_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=ha-461283 minikube.k8s.io/primary=true
	I0722 10:47:17.626544   24174 ops.go:34] apiserver oom_adj: -16
	I0722 10:47:17.739150   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:18.239341   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:18.739971   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:19.239223   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:19.739929   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:20.239205   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:20.740153   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:21.239728   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:21.739417   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:22.239578   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:22.739898   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:23.239783   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:23.739199   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:24.239890   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:24.739999   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:25.240078   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:25.739538   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:26.239601   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:26.740161   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:27.239192   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:27.739566   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:28.239592   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:28.739245   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:29.239826   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:29.740137   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:29.853675   24174 kubeadm.go:1113] duration metric: took 12.312302949s to wait for elevateKubeSystemPrivileges
	I0722 10:47:29.853709   24174 kubeadm.go:394] duration metric: took 23.808652025s to StartCluster
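The burst of "kubectl get sa default" calls above is a simple poll: retry roughly every 500ms until the default service account exists, which here accounts for the 12.3s elevateKubeSystemPrivileges duration. A minimal sketch of that loop, reusing the kubectl path and kubeconfig from the log:

    // sa_poll.go - wait for the default ServiceAccount to appear.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command(kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }
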
	I0722 10:47:29.853731   24174 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:29.853815   24174 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:47:29.854481   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:29.854675   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0722 10:47:29.854683   24174 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:47:29.854700   24174 start.go:241] waiting for startup goroutines ...
	I0722 10:47:29.854707   24174 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 10:47:29.854750   24174 addons.go:69] Setting storage-provisioner=true in profile "ha-461283"
	I0722 10:47:29.854764   24174 addons.go:69] Setting default-storageclass=true in profile "ha-461283"
	I0722 10:47:29.854785   24174 addons.go:234] Setting addon storage-provisioner=true in "ha-461283"
	I0722 10:47:29.854799   24174 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-461283"
	I0722 10:47:29.854826   24174 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:47:29.854882   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:47:29.855183   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:47:29.855192   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:47:29.855221   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:47:29.855225   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:47:29.870374   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37101
	I0722 10:47:29.870374   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43173
	I0722 10:47:29.870774   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:47:29.870902   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:47:29.871418   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:47:29.871433   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:47:29.871552   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:47:29.871574   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:47:29.871790   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:47:29.871884   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:47:29.871982   24174 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:47:29.872425   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:47:29.872468   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:47:29.874094   24174 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:47:29.874430   24174 kapi.go:59] client config for ha-461283: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.crt", KeyFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key", CAFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 10:47:29.875043   24174 cert_rotation.go:137] Starting client certificate rotation controller
	I0722 10:47:29.875248   24174 addons.go:234] Setting addon default-storageclass=true in "ha-461283"
	I0722 10:47:29.875292   24174 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:47:29.875696   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:47:29.875782   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:47:29.888593   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36543
	I0722 10:47:29.889131   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:47:29.889610   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:47:29.889636   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:47:29.889944   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:47:29.890112   24174 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:47:29.890380   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45351
	I0722 10:47:29.890699   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:47:29.891177   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:47:29.891201   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:47:29.891515   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:47:29.891823   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:47:29.892091   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:47:29.892116   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:47:29.893436   24174 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 10:47:29.894507   24174 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 10:47:29.894528   24174 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 10:47:29.894545   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:47:29.897345   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:47:29.897750   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:47:29.897784   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:47:29.897894   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:47:29.898065   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:47:29.898214   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:47:29.898369   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:47:29.907642   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0722 10:47:29.908113   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:47:29.908649   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:47:29.908672   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:47:29.908961   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:47:29.909124   24174 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:47:29.910601   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:47:29.910788   24174 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 10:47:29.910802   24174 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 10:47:29.910813   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:47:29.913649   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:47:29.914042   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:47:29.914068   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:47:29.914202   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:47:29.914360   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:47:29.914541   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:47:29.914672   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:47:30.053761   24174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 10:47:30.063293   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0722 10:47:30.073055   24174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
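
The sed pipeline above patches the coredns ConfigMap in place: it inserts a hosts{} stanza in front of the "forward . /etc/resolv.conf" directive so that host.minikube.internal resolves to the host gateway (192.168.39.1 here), and adds a "log" directive before "errors". A minimal Go sketch of the same hosts injection, assuming the Corefile is available as a plain string (the function and the sample Corefile are illustrative, not minikube's code):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS "hosts" block in front of the
// "forward . /etc/resolv.conf" directive, mirroring what the sed
// expression in the log does to the coredns ConfigMap.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }",
		hostIP)
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
}
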
	I0722 10:47:30.758246   24174 main.go:141] libmachine: Making call to close driver server
	I0722 10:47:30.758275   24174 main.go:141] libmachine: (ha-461283) Calling .Close
	I0722 10:47:30.758283   24174 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0722 10:47:30.758365   24174 main.go:141] libmachine: Making call to close driver server
	I0722 10:47:30.758384   24174 main.go:141] libmachine: (ha-461283) Calling .Close
	I0722 10:47:30.758557   24174 main.go:141] libmachine: (ha-461283) DBG | Closing plugin on server side
	I0722 10:47:30.758600   24174 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:47:30.758606   24174 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:47:30.758609   24174 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:47:30.758621   24174 main.go:141] libmachine: (ha-461283) DBG | Closing plugin on server side
	I0722 10:47:30.758630   24174 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:47:30.758640   24174 main.go:141] libmachine: Making call to close driver server
	I0722 10:47:30.758702   24174 main.go:141] libmachine: Making call to close driver server
	I0722 10:47:30.758730   24174 main.go:141] libmachine: (ha-461283) Calling .Close
	I0722 10:47:30.758756   24174 main.go:141] libmachine: (ha-461283) Calling .Close
	I0722 10:47:30.758986   24174 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:47:30.758999   24174 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:47:30.759015   24174 main.go:141] libmachine: (ha-461283) DBG | Closing plugin on server side
	I0722 10:47:30.759047   24174 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:47:30.759084   24174 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:47:30.759168   24174 main.go:141] libmachine: (ha-461283) DBG | Closing plugin on server side
	I0722 10:47:30.759215   24174 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0722 10:47:30.759230   24174 round_trippers.go:469] Request Headers:
	I0722 10:47:30.759241   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:47:30.759249   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:47:30.768796   24174 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 10:47:30.769486   24174 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0722 10:47:30.769502   24174 round_trippers.go:469] Request Headers:
	I0722 10:47:30.769513   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:47:30.769520   24174 round_trippers.go:473]     Content-Type: application/json
	I0722 10:47:30.769526   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:47:30.779730   24174 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
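
The GET and PUT of /apis/storage.k8s.io/v1/storageclasses above are the default-storageclass addon reconciling the "standard" StorageClass through the API server. A hedged client-go sketch of that read-modify-write, assuming the conventional is-default-class annotation is what gets set (the log does not show the exact field minikube touches, and the kubeconfig path is taken from the commands above):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the harness uses on the VM (path is illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Read the StorageClass, mark it default, write it back — the GET
	// followed by PUT seen in the round_trippers lines above.
	sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("standard StorageClass marked default")
}
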
	I0722 10:47:30.779913   24174 main.go:141] libmachine: Making call to close driver server
	I0722 10:47:30.779930   24174 main.go:141] libmachine: (ha-461283) Calling .Close
	I0722 10:47:30.780192   24174 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:47:30.780201   24174 main.go:141] libmachine: (ha-461283) DBG | Closing plugin on server side
	I0722 10:47:30.780210   24174 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:47:30.781798   24174 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0722 10:47:30.783086   24174 addons.go:510] duration metric: took 928.374319ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0722 10:47:30.783124   24174 start.go:246] waiting for cluster config update ...
	I0722 10:47:30.783139   24174 start.go:255] writing updated cluster config ...
	I0722 10:47:30.784700   24174 out.go:177] 
	I0722 10:47:30.786021   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:47:30.786099   24174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:47:30.787696   24174 out.go:177] * Starting "ha-461283-m02" control-plane node in "ha-461283" cluster
	I0722 10:47:30.788917   24174 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:47:30.788938   24174 cache.go:56] Caching tarball of preloaded images
	I0722 10:47:30.789021   24174 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 10:47:30.789033   24174 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 10:47:30.789107   24174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:47:30.789324   24174 start.go:360] acquireMachinesLock for ha-461283-m02: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 10:47:30.789373   24174 start.go:364] duration metric: took 28.905µs to acquireMachinesLock for "ha-461283-m02"
	I0722 10:47:30.789395   24174 start.go:93] Provisioning new machine with config: &{Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:47:30.789475   24174 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0722 10:47:30.790912   24174 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 10:47:30.790995   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:47:30.791017   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:47:30.809793   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I0722 10:47:30.810272   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:47:30.810808   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:47:30.810835   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:47:30.811186   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:47:30.811360   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetMachineName
	I0722 10:47:30.811512   24174 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:47:30.811655   24174 start.go:159] libmachine.API.Create for "ha-461283" (driver="kvm2")
	I0722 10:47:30.811681   24174 client.go:168] LocalClient.Create starting
	I0722 10:47:30.811713   24174 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem
	I0722 10:47:30.811753   24174 main.go:141] libmachine: Decoding PEM data...
	I0722 10:47:30.811772   24174 main.go:141] libmachine: Parsing certificate...
	I0722 10:47:30.811832   24174 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem
	I0722 10:47:30.811856   24174 main.go:141] libmachine: Decoding PEM data...
	I0722 10:47:30.811890   24174 main.go:141] libmachine: Parsing certificate...
	I0722 10:47:30.811914   24174 main.go:141] libmachine: Running pre-create checks...
	I0722 10:47:30.811925   24174 main.go:141] libmachine: (ha-461283-m02) Calling .PreCreateCheck
	I0722 10:47:30.812066   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetConfigRaw
	I0722 10:47:30.812481   24174 main.go:141] libmachine: Creating machine...
	I0722 10:47:30.812494   24174 main.go:141] libmachine: (ha-461283-m02) Calling .Create
	I0722 10:47:30.812621   24174 main.go:141] libmachine: (ha-461283-m02) Creating KVM machine...
	I0722 10:47:30.813690   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found existing default KVM network
	I0722 10:47:30.813811   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found existing private KVM network mk-ha-461283
	I0722 10:47:30.813956   24174 main.go:141] libmachine: (ha-461283-m02) Setting up store path in /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02 ...
	I0722 10:47:30.813977   24174 main.go:141] libmachine: (ha-461283-m02) Building disk image from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 10:47:30.814022   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:30.813934   24559 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:47:30.814143   24174 main.go:141] libmachine: (ha-461283-m02) Downloading /home/jenkins/minikube-integration/19313-5960/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 10:47:31.053571   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:31.053413   24559 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa...
	I0722 10:47:31.215683   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:31.215590   24559 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/ha-461283-m02.rawdisk...
	I0722 10:47:31.215720   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Writing magic tar header
	I0722 10:47:31.215731   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Writing SSH key tar header
	I0722 10:47:31.215811   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:31.215737   24559 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02 ...
	I0722 10:47:31.215875   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02
	I0722 10:47:31.215902   24174 main.go:141] libmachine: (ha-461283-m02) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02 (perms=drwx------)
	I0722 10:47:31.215919   24174 main.go:141] libmachine: (ha-461283-m02) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines (perms=drwxr-xr-x)
	I0722 10:47:31.215934   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines
	I0722 10:47:31.215949   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:47:31.215962   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960
	I0722 10:47:31.215974   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0722 10:47:31.215983   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Checking permissions on dir: /home/jenkins
	I0722 10:47:31.216071   24174 main.go:141] libmachine: (ha-461283-m02) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube (perms=drwxr-xr-x)
	I0722 10:47:31.216107   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Checking permissions on dir: /home
	I0722 10:47:31.216118   24174 main.go:141] libmachine: (ha-461283-m02) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960 (perms=drwxrwxr-x)
	I0722 10:47:31.216125   24174 main.go:141] libmachine: (ha-461283-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0722 10:47:31.216136   24174 main.go:141] libmachine: (ha-461283-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0722 10:47:31.216151   24174 main.go:141] libmachine: (ha-461283-m02) Creating domain...
	I0722 10:47:31.216164   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Skipping /home - not owner
	I0722 10:47:31.216977   24174 main.go:141] libmachine: (ha-461283-m02) define libvirt domain using xml: 
	I0722 10:47:31.216992   24174 main.go:141] libmachine: (ha-461283-m02) <domain type='kvm'>
	I0722 10:47:31.217001   24174 main.go:141] libmachine: (ha-461283-m02)   <name>ha-461283-m02</name>
	I0722 10:47:31.217009   24174 main.go:141] libmachine: (ha-461283-m02)   <memory unit='MiB'>2200</memory>
	I0722 10:47:31.217028   24174 main.go:141] libmachine: (ha-461283-m02)   <vcpu>2</vcpu>
	I0722 10:47:31.217039   24174 main.go:141] libmachine: (ha-461283-m02)   <features>
	I0722 10:47:31.217048   24174 main.go:141] libmachine: (ha-461283-m02)     <acpi/>
	I0722 10:47:31.217057   24174 main.go:141] libmachine: (ha-461283-m02)     <apic/>
	I0722 10:47:31.217065   24174 main.go:141] libmachine: (ha-461283-m02)     <pae/>
	I0722 10:47:31.217078   24174 main.go:141] libmachine: (ha-461283-m02)     
	I0722 10:47:31.217091   24174 main.go:141] libmachine: (ha-461283-m02)   </features>
	I0722 10:47:31.217101   24174 main.go:141] libmachine: (ha-461283-m02)   <cpu mode='host-passthrough'>
	I0722 10:47:31.217110   24174 main.go:141] libmachine: (ha-461283-m02)   
	I0722 10:47:31.217114   24174 main.go:141] libmachine: (ha-461283-m02)   </cpu>
	I0722 10:47:31.217120   24174 main.go:141] libmachine: (ha-461283-m02)   <os>
	I0722 10:47:31.217124   24174 main.go:141] libmachine: (ha-461283-m02)     <type>hvm</type>
	I0722 10:47:31.217130   24174 main.go:141] libmachine: (ha-461283-m02)     <boot dev='cdrom'/>
	I0722 10:47:31.217147   24174 main.go:141] libmachine: (ha-461283-m02)     <boot dev='hd'/>
	I0722 10:47:31.217161   24174 main.go:141] libmachine: (ha-461283-m02)     <bootmenu enable='no'/>
	I0722 10:47:31.217170   24174 main.go:141] libmachine: (ha-461283-m02)   </os>
	I0722 10:47:31.217176   24174 main.go:141] libmachine: (ha-461283-m02)   <devices>
	I0722 10:47:31.217187   24174 main.go:141] libmachine: (ha-461283-m02)     <disk type='file' device='cdrom'>
	I0722 10:47:31.217197   24174 main.go:141] libmachine: (ha-461283-m02)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/boot2docker.iso'/>
	I0722 10:47:31.217210   24174 main.go:141] libmachine: (ha-461283-m02)       <target dev='hdc' bus='scsi'/>
	I0722 10:47:31.217217   24174 main.go:141] libmachine: (ha-461283-m02)       <readonly/>
	I0722 10:47:31.217228   24174 main.go:141] libmachine: (ha-461283-m02)     </disk>
	I0722 10:47:31.217239   24174 main.go:141] libmachine: (ha-461283-m02)     <disk type='file' device='disk'>
	I0722 10:47:31.217273   24174 main.go:141] libmachine: (ha-461283-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0722 10:47:31.217309   24174 main.go:141] libmachine: (ha-461283-m02)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/ha-461283-m02.rawdisk'/>
	I0722 10:47:31.217330   24174 main.go:141] libmachine: (ha-461283-m02)       <target dev='hda' bus='virtio'/>
	I0722 10:47:31.217345   24174 main.go:141] libmachine: (ha-461283-m02)     </disk>
	I0722 10:47:31.217357   24174 main.go:141] libmachine: (ha-461283-m02)     <interface type='network'>
	I0722 10:47:31.217368   24174 main.go:141] libmachine: (ha-461283-m02)       <source network='mk-ha-461283'/>
	I0722 10:47:31.217378   24174 main.go:141] libmachine: (ha-461283-m02)       <model type='virtio'/>
	I0722 10:47:31.217387   24174 main.go:141] libmachine: (ha-461283-m02)     </interface>
	I0722 10:47:31.217399   24174 main.go:141] libmachine: (ha-461283-m02)     <interface type='network'>
	I0722 10:47:31.217409   24174 main.go:141] libmachine: (ha-461283-m02)       <source network='default'/>
	I0722 10:47:31.217419   24174 main.go:141] libmachine: (ha-461283-m02)       <model type='virtio'/>
	I0722 10:47:31.217431   24174 main.go:141] libmachine: (ha-461283-m02)     </interface>
	I0722 10:47:31.217442   24174 main.go:141] libmachine: (ha-461283-m02)     <serial type='pty'>
	I0722 10:47:31.217454   24174 main.go:141] libmachine: (ha-461283-m02)       <target port='0'/>
	I0722 10:47:31.217464   24174 main.go:141] libmachine: (ha-461283-m02)     </serial>
	I0722 10:47:31.217472   24174 main.go:141] libmachine: (ha-461283-m02)     <console type='pty'>
	I0722 10:47:31.217483   24174 main.go:141] libmachine: (ha-461283-m02)       <target type='serial' port='0'/>
	I0722 10:47:31.217493   24174 main.go:141] libmachine: (ha-461283-m02)     </console>
	I0722 10:47:31.217502   24174 main.go:141] libmachine: (ha-461283-m02)     <rng model='virtio'>
	I0722 10:47:31.217518   24174 main.go:141] libmachine: (ha-461283-m02)       <backend model='random'>/dev/random</backend>
	I0722 10:47:31.217529   24174 main.go:141] libmachine: (ha-461283-m02)     </rng>
	I0722 10:47:31.217539   24174 main.go:141] libmachine: (ha-461283-m02)     
	I0722 10:47:31.217547   24174 main.go:141] libmachine: (ha-461283-m02)     
	I0722 10:47:31.217558   24174 main.go:141] libmachine: (ha-461283-m02)   </devices>
	I0722 10:47:31.217569   24174 main.go:141] libmachine: (ha-461283-m02) </domain>
	I0722 10:47:31.217577   24174 main.go:141] libmachine: (ha-461283-m02) 
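
The XML above is what the kvm2 driver hands to libvirt to define and then start the m02 guest. A minimal sketch of that define-and-create step using the libvirt Go bindings (the specific bindings import and the elided XML body are assumptions; minikube's driver has its own wrapper around this):

package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt" // github.com/libvirt/libvirt-go in older trees
)

func main() {
	// Connect to the same system URI shown in the config (qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	domainXML := `<domain type='kvm'>...</domain>` // the XML logged above, elided here

	// Define the persistent domain from XML, then boot it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started")
}
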
	I0722 10:47:31.223742   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:2e:15:a4 in network default
	I0722 10:47:31.224298   24174 main.go:141] libmachine: (ha-461283-m02) Ensuring networks are active...
	I0722 10:47:31.224329   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:31.225166   24174 main.go:141] libmachine: (ha-461283-m02) Ensuring network default is active
	I0722 10:47:31.225485   24174 main.go:141] libmachine: (ha-461283-m02) Ensuring network mk-ha-461283 is active
	I0722 10:47:31.225842   24174 main.go:141] libmachine: (ha-461283-m02) Getting domain xml...
	I0722 10:47:31.226695   24174 main.go:141] libmachine: (ha-461283-m02) Creating domain...
	I0722 10:47:32.436447   24174 main.go:141] libmachine: (ha-461283-m02) Waiting to get IP...
	I0722 10:47:32.437487   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:32.437934   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:32.437982   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:32.437904   24559 retry.go:31] will retry after 288.868303ms: waiting for machine to come up
	I0722 10:47:32.728315   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:32.728764   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:32.728790   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:32.728717   24559 retry.go:31] will retry after 378.239876ms: waiting for machine to come up
	I0722 10:47:33.108293   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:33.108869   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:33.108900   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:33.108798   24559 retry.go:31] will retry after 413.894738ms: waiting for machine to come up
	I0722 10:47:33.524142   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:33.524580   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:33.524608   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:33.524547   24559 retry.go:31] will retry after 555.748732ms: waiting for machine to come up
	I0722 10:47:34.082284   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:34.082731   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:34.082761   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:34.082690   24559 retry.go:31] will retry after 731.862289ms: waiting for machine to come up
	I0722 10:47:34.816601   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:34.817015   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:34.817044   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:34.816977   24559 retry.go:31] will retry after 770.464616ms: waiting for machine to come up
	I0722 10:47:35.588905   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:35.589391   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:35.589420   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:35.589332   24559 retry.go:31] will retry after 873.256858ms: waiting for machine to come up
	I0722 10:47:36.464080   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:36.464468   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:36.464495   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:36.464429   24559 retry.go:31] will retry after 1.402422875s: waiting for machine to come up
	I0722 10:47:37.868851   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:37.869255   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:37.869311   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:37.869226   24559 retry.go:31] will retry after 1.689037725s: waiting for machine to come up
	I0722 10:47:39.559985   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:39.560442   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:39.560496   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:39.560401   24559 retry.go:31] will retry after 1.943562609s: waiting for machine to come up
	I0722 10:47:41.505107   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:41.505555   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:41.505584   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:41.505507   24559 retry.go:31] will retry after 1.896819693s: waiting for machine to come up
	I0722 10:47:43.403486   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:43.403863   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:43.403905   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:43.403826   24559 retry.go:31] will retry after 2.894977506s: waiting for machine to come up
	I0722 10:47:46.300078   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:46.300472   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:46.300499   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:46.300430   24559 retry.go:31] will retry after 3.384903237s: waiting for machine to come up
	I0722 10:47:49.688927   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:49.689333   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:49.689359   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:49.689311   24559 retry.go:31] will retry after 5.437630652s: waiting for machine to come up
	I0722 10:47:55.132136   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.132633   24174 main.go:141] libmachine: (ha-461283-m02) Found IP for machine: 192.168.39.207
	I0722 10:47:55.132653   24174 main.go:141] libmachine: (ha-461283-m02) Reserving static IP address...
	I0722 10:47:55.132683   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has current primary IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.132979   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find host DHCP lease matching {name: "ha-461283-m02", mac: "52:54:00:a7:59:21", ip: "192.168.39.207"} in network mk-ha-461283
	I0722 10:47:55.200912   24174 main.go:141] libmachine: (ha-461283-m02) Reserved static IP address: 192.168.39.207
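
The repeated "retry.go:31] will retry after …" lines are a polling loop that re-reads the libvirt/DHCP lease table with a growing delay until the guest's address appears. A simplified sketch of that pattern, with a stubbed lookupIP standing in for the lease query (names and backoff constants are illustrative, not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupIP stands in for querying the DHCP leases of the libvirt network.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 { // pretend the lease shows up on the sixth poll
		return "", errNoIP
	}
	return "192.168.39.207", nil
}

func main() {
	delay := 300 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Back off with a little jitter, capped so the sleep never grows
		// unbounded, mirroring the increasing intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay += delay / 2
		}
	}
}
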
	I0722 10:47:55.200942   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Getting to WaitForSSH function...
	I0722 10:47:55.200950   24174 main.go:141] libmachine: (ha-461283-m02) Waiting for SSH to be available...
	I0722 10:47:55.203647   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.204124   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:55.204153   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.204285   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Using SSH client type: external
	I0722 10:47:55.204304   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa (-rw-------)
	I0722 10:47:55.204335   24174 main.go:141] libmachine: (ha-461283-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 10:47:55.204346   24174 main.go:141] libmachine: (ha-461283-m02) DBG | About to run SSH command:
	I0722 10:47:55.204355   24174 main.go:141] libmachine: (ha-461283-m02) DBG | exit 0
	I0722 10:47:55.336397   24174 main.go:141] libmachine: (ha-461283-m02) DBG | SSH cmd err, output: <nil>: 
	I0722 10:47:55.336658   24174 main.go:141] libmachine: (ha-461283-m02) KVM machine creation complete!
	I0722 10:47:55.337055   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetConfigRaw
	I0722 10:47:55.337646   24174 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:47:55.337831   24174 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:47:55.338013   24174 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0722 10:47:55.338028   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetState
	I0722 10:47:55.339291   24174 main.go:141] libmachine: Detecting operating system of created instance...
	I0722 10:47:55.339307   24174 main.go:141] libmachine: Waiting for SSH to be available...
	I0722 10:47:55.339315   24174 main.go:141] libmachine: Getting to WaitForSSH function...
	I0722 10:47:55.339323   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:55.341274   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.341603   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:55.341630   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.341766   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:55.341921   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.342054   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.342173   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:55.342322   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:47:55.342508   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0722 10:47:55.342521   24174 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0722 10:47:55.451331   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
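
WaitForSSH simply keeps running "exit 0" over SSH until it exits cleanly, which is how the driver decides the guest's sshd is reachable. A minimal sketch with golang.org/x/crypto/ssh, reusing the generated id_rsa key and the docker user shown in the log (the key path comes from the log; the poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
		Timeout:         10 * time.Second,
	}

	// Poll until "exit 0" runs cleanly, i.e. sshd inside the guest is up.
	for {
		client, err := ssh.Dial("tcp", "192.168.39.207:22", cfg)
		if err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				runErr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if runErr == nil {
					fmt.Println("SSH is available")
					return
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
}
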
	I0722 10:47:55.451352   24174 main.go:141] libmachine: Detecting the provisioner...
	I0722 10:47:55.451362   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:55.454013   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.454340   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:55.454367   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.454486   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:55.454653   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.454804   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.454945   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:55.455100   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:47:55.455300   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0722 10:47:55.455316   24174 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0722 10:47:55.568915   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0722 10:47:55.568993   24174 main.go:141] libmachine: found compatible host: buildroot
	I0722 10:47:55.569006   24174 main.go:141] libmachine: Provisioning with buildroot...
	I0722 10:47:55.569016   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetMachineName
	I0722 10:47:55.569242   24174 buildroot.go:166] provisioning hostname "ha-461283-m02"
	I0722 10:47:55.569279   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetMachineName
	I0722 10:47:55.569456   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:55.572113   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.572438   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:55.572473   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.572633   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:55.572799   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.572944   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.573063   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:55.573178   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:47:55.573346   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0722 10:47:55.573357   24174 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-461283-m02 && echo "ha-461283-m02" | sudo tee /etc/hostname
	I0722 10:47:55.699778   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-461283-m02
	
	I0722 10:47:55.699804   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:55.702298   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.702649   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:55.702682   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.702857   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:55.703007   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.703129   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.703262   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:55.703472   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:47:55.703679   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0722 10:47:55.703696   24174 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-461283-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-461283-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-461283-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 10:47:55.826649   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 10:47:55.826674   24174 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 10:47:55.826688   24174 buildroot.go:174] setting up certificates
	I0722 10:47:55.826697   24174 provision.go:84] configureAuth start
	I0722 10:47:55.826704   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetMachineName
	I0722 10:47:55.826918   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetIP
	I0722 10:47:55.829420   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.829755   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:55.829778   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.829941   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:55.831732   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.831950   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:55.831979   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.832071   24174 provision.go:143] copyHostCerts
	I0722 10:47:55.832099   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 10:47:55.832138   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 10:47:55.832150   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 10:47:55.832224   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 10:47:55.832367   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 10:47:55.832405   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 10:47:55.832415   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 10:47:55.832455   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 10:47:55.832504   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 10:47:55.832520   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 10:47:55.832526   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 10:47:55.832550   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 10:47:55.832600   24174 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.ha-461283-m02 san=[127.0.0.1 192.168.39.207 ha-461283-m02 localhost minikube]
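
The per-machine server.pem is an x509 certificate signed by the profile CA, carrying the SANs listed in the provision.go line above. A compressed crypto/x509 sketch of issuing such a cert; the self-signed CA here stands in for .minikube/certs/ca.pem and error handling is elided, so treat it as an illustration rather than minikube's implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in for the profile CA (ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-461283-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		DNSNames:     []string{"ha-461283-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.207")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("server cert: %d DER bytes\n", len(srvDER))
}
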
	I0722 10:47:55.977172   24174 provision.go:177] copyRemoteCerts
	I0722 10:47:55.977222   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 10:47:55.977240   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:55.979482   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.979780   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:55.979802   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.980017   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:55.980213   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.980399   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:55.980536   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	I0722 10:47:56.066264   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 10:47:56.066328   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 10:47:56.093525   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 10:47:56.093586   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 10:47:56.117413   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 10:47:56.117466   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 10:47:56.140595   24174 provision.go:87] duration metric: took 313.886457ms to configureAuth
	I0722 10:47:56.140619   24174 buildroot.go:189] setting minikube options for container-runtime
	I0722 10:47:56.140767   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:47:56.140832   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:56.143335   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.143698   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:56.143720   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.143924   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:56.144091   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:56.144255   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:56.144375   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:56.144547   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:47:56.144729   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0722 10:47:56.144746   24174 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 10:47:56.435279   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 10:47:56.435307   24174 main.go:141] libmachine: Checking connection to Docker...
	I0722 10:47:56.435317   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetURL
	I0722 10:47:56.436836   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Using libvirt version 6000000
	I0722 10:47:56.439630   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.440017   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:56.440039   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.440229   24174 main.go:141] libmachine: Docker is up and running!
	I0722 10:47:56.440245   24174 main.go:141] libmachine: Reticulating splines...
	I0722 10:47:56.440252   24174 client.go:171] duration metric: took 25.62856269s to LocalClient.Create
	I0722 10:47:56.440274   24174 start.go:167] duration metric: took 25.628621079s to libmachine.API.Create "ha-461283"
	I0722 10:47:56.440281   24174 start.go:293] postStartSetup for "ha-461283-m02" (driver="kvm2")
	I0722 10:47:56.440291   24174 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 10:47:56.440316   24174 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:47:56.440572   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 10:47:56.440592   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:56.442760   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.443071   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:56.443089   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.443242   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:56.443419   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:56.443584   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:56.443733   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	I0722 10:47:56.531078   24174 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 10:47:56.535593   24174 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 10:47:56.535623   24174 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 10:47:56.535718   24174 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 10:47:56.535867   24174 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 10:47:56.535882   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /etc/ssl/certs/130982.pem
	I0722 10:47:56.536006   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 10:47:56.544961   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 10:47:56.569042   24174 start.go:296] duration metric: took 128.750355ms for postStartSetup
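
The filesync scan above mirrors everything under the local .minikube/files tree onto the guest at the same relative path (files/etc/ssl/certs/130982.pem lands at /etc/ssl/certs/130982.pem). A minimal sketch of that path mapping, assuming a plain directory walk rather than minikube's actual vm_assets helpers:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// listAssets walks a local "files" tree (e.g. ~/.minikube/files) and returns
// the guest destination for every regular file, preserving the relative path.
func listAssets(root string) (map[string]string, error) {
	assets := map[string]string{}
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		// files/etc/ssl/certs/130982.pem -> /etc/ssl/certs/130982.pem
		assets[path] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return assets, err
}

func main() {
	m, err := listAssets("/home/jenkins/minikube-integration/19313-5960/.minikube/files")
	if err != nil {
		fmt.Println("walk error:", err)
		return
	}
	for src, dst := range m {
		fmt.Printf("%s -> %s\n", src, dst)
	}
}
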
	I0722 10:47:56.569083   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetConfigRaw
	I0722 10:47:56.569663   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetIP
	I0722 10:47:56.572431   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.572805   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:56.572833   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.573025   24174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:47:56.573231   24174 start.go:128] duration metric: took 25.783745658s to createHost
	I0722 10:47:56.573252   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:56.575374   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.575698   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:56.575729   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.575869   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:56.576111   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:56.576293   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:56.576435   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:56.576596   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:47:56.576743   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0722 10:47:56.576753   24174 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 10:47:56.688833   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645276.666438491
	
	I0722 10:47:56.688860   24174 fix.go:216] guest clock: 1721645276.666438491
	I0722 10:47:56.688871   24174 fix.go:229] Guest: 2024-07-22 10:47:56.666438491 +0000 UTC Remote: 2024-07-22 10:47:56.573243102 +0000 UTC m=+78.419788115 (delta=93.195389ms)
	I0722 10:47:56.688895   24174 fix.go:200] guest clock delta is within tolerance: 93.195389ms
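
The clock check above runs `date +%s.%N` on the guest (the %!s(MISSING) noise appears to be a Go format-string quirk in the logger, not part of the command) and compares the result against the host's clock. A small sketch of that delta calculation; the 1s tolerance is chosen here purely for illustration:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far the
// guest clock is from the supplied host reference time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad/truncate the fractional part to 9 digits of nanoseconds
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	return guest.Sub(host), nil
}

func main() {
	// Values taken from the log lines above.
	d, err := clockDelta("1721645276.666438491", time.Unix(1721645276, 573243102))
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // illustrative tolerance, not minikube's setting
	fmt.Printf("delta=%v within tolerance=%v: %v\n", d, tolerance, d > -tolerance && d < tolerance)
	// prints delta=93.195389ms within tolerance=1s: true
}
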
	I0722 10:47:56.688906   24174 start.go:83] releasing machines lock for "ha-461283-m02", held for 25.899520813s
	I0722 10:47:56.688934   24174 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:47:56.689186   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetIP
	I0722 10:47:56.691616   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.691947   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:56.691967   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.694066   24174 out.go:177] * Found network options:
	I0722 10:47:56.695515   24174 out.go:177]   - NO_PROXY=192.168.39.43
	W0722 10:47:56.696822   24174 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 10:47:56.696863   24174 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:47:56.697471   24174 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:47:56.697647   24174 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:47:56.697743   24174 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 10:47:56.697784   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	W0722 10:47:56.697811   24174 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 10:47:56.697891   24174 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 10:47:56.697911   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:56.700410   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.700648   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.700725   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:56.700748   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.700879   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:56.700998   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:56.701022   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.701027   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:56.701180   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:56.701211   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:56.701349   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	I0722 10:47:56.701360   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:56.701517   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:56.701627   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	I0722 10:47:56.948845   24174 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 10:47:56.956269   24174 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 10:47:56.956331   24174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 10:47:56.973374   24174 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 10:47:56.973391   24174 start.go:495] detecting cgroup driver to use...
	I0722 10:47:56.973435   24174 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 10:47:56.992989   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 10:47:57.009902   24174 docker.go:217] disabling cri-docker service (if available) ...
	I0722 10:47:57.009961   24174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 10:47:57.025982   24174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 10:47:57.039149   24174 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 10:47:57.149910   24174 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 10:47:57.330760   24174 docker.go:233] disabling docker service ...
	I0722 10:47:57.330834   24174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 10:47:57.344563   24174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 10:47:57.357536   24174 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 10:47:57.475780   24174 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 10:47:57.594265   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 10:47:57.609478   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 10:47:57.627377   24174 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 10:47:57.627437   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:57.637202   24174 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 10:47:57.637266   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:57.647096   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:57.656843   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:57.666504   24174 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 10:47:57.676452   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:57.687046   24174 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:57.703812   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
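
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A local sketch of one such edit, assuming the same sed expression but run directly rather than through minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Point cri-o at the cgroupfs cgroup manager, mirroring the sed edit above.
	// Assumes a local /etc/crio/crio.conf.d/02-crio.conf and sudo access.
	cmd := exec.Command("sh", "-c",
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("sed failed: %v\n%s", err, out)
		return
	}
	fmt.Println("cgroup_manager set to cgroupfs; restart crio for it to take effect")
}
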
	I0722 10:47:57.713671   24174 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 10:47:57.722303   24174 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 10:47:57.722349   24174 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 10:47:57.734915   24174 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
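
The fallback above is: if the bridge-nf-call-iptables sysctl is missing, load br_netfilter, then enable IPv4 forwarding. A sketch of the same chain, assuming root on the local machine:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback above: if the bridge-nf-call-iptables
// sysctl is missing, load br_netfilter first, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed (requires root):", err)
		return
	}
	fmt.Println("br_netfilter loaded and ip_forward enabled")
}
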
	I0722 10:47:57.743900   24174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:47:57.855952   24174 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 10:47:58.001609   24174 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 10:47:58.001687   24174 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 10:47:58.006322   24174 start.go:563] Will wait 60s for crictl version
	I0722 10:47:58.006370   24174 ssh_runner.go:195] Run: which crictl
	I0722 10:47:58.010146   24174 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 10:47:58.050516   24174 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 10:47:58.050584   24174 ssh_runner.go:195] Run: crio --version
	I0722 10:47:58.079421   24174 ssh_runner.go:195] Run: crio --version
	I0722 10:47:58.109795   24174 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 10:47:58.111043   24174 out.go:177]   - env NO_PROXY=192.168.39.43
	I0722 10:47:58.112290   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetIP
	I0722 10:47:58.114875   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:58.115259   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:58.115281   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:58.115505   24174 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 10:47:58.119902   24174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
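
The one-liner above replaces any stale host.minikube.internal entry in /etc/hosts with the gateway IP. The same upsert expressed directly in Go, demoed against a scratch file since /etc/hosts needs root:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any existing line for the given hostname and appends a fresh
// "<ip>\t<host>" entry, the same effect as the grep -v / echo pipeline above.
func upsertHost(hostsFile, ip, host string) error {
	data, err := os.ReadFile(hostsFile)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Demo against a scratch file; pointing this at /etc/hosts would need root.
	tmp := "/tmp/hosts.demo"
	_ = os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n"), 0644)
	if err := upsertHost(tmp, "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
		return
	}
	out, _ := os.ReadFile(tmp)
	fmt.Print(string(out))
}
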
	I0722 10:47:58.132811   24174 mustload.go:65] Loading cluster: ha-461283
	I0722 10:47:58.133021   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:47:58.133298   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:47:58.133321   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:47:58.147456   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
	I0722 10:47:58.147842   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:47:58.148301   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:47:58.148323   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:47:58.148580   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:47:58.148755   24174 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:47:58.150177   24174 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:47:58.150449   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:47:58.150473   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:47:58.164905   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36539
	I0722 10:47:58.165296   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:47:58.165714   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:47:58.165733   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:47:58.166057   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:47:58.166245   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:47:58.166410   24174 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283 for IP: 192.168.39.207
	I0722 10:47:58.166422   24174 certs.go:194] generating shared ca certs ...
	I0722 10:47:58.166437   24174 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:58.166581   24174 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 10:47:58.166637   24174 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 10:47:58.166650   24174 certs.go:256] generating profile certs ...
	I0722 10:47:58.166742   24174 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key
	I0722 10:47:58.166772   24174 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.40161ecc
	I0722 10:47:58.166791   24174 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.40161ecc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.43 192.168.39.207 192.168.39.254]
	I0722 10:47:58.429254   24174 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.40161ecc ...
	I0722 10:47:58.429281   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.40161ecc: {Name:mk8a97d59811d83ad3be1c8b591fda17bff6b927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:58.429437   24174 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.40161ecc ...
	I0722 10:47:58.429449   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.40161ecc: {Name:mk595f26bd56e36f899c39440569455e9ebee967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:58.429522   24174 certs.go:381] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.40161ecc -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt
	I0722 10:47:58.429645   24174 certs.go:385] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.40161ecc -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key
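
The profile cert step above issues an apiserver serving certificate whose IP SANs cover the service VIP (10.96.0.1), localhost, both control-plane node IPs, and the kube-vip address 192.168.39.254, then promotes the hashed .40161ecc files to apiserver.crt/apiserver.key. A rough sketch of issuing such a cert with Go's crypto/x509; the throwaway CA in main stands in for .minikube/ca.{crt,key}, and minikube's real helper is not reproduced here:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// signServingCert issues an apiserver serving certificate for the given IP SANs,
// signed by the supplied CA. Illustrative only; field choices are assumptions.
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// Throwaway CA purely for the demo; the real flow loads .minikube/ca.{crt,key}.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// SAN list taken from the log line above.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.43"), net.ParseIP("192.168.39.207"), net.ParseIP("192.168.39.254"),
	}
	certPEM, keyPEM, err := signServingCert(caCert, caKey, ips)
	if err != nil {
		fmt.Println("sign failed:", err)
		return
	}
	_ = os.WriteFile("apiserver.crt", certPEM, 0644)
	_ = os.WriteFile("apiserver.key", keyPEM, 0600)
	fmt.Println("wrote apiserver.crt / apiserver.key with", len(ips), "IP SANs")
}
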
	I0722 10:47:58.429766   24174 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key
	I0722 10:47:58.429781   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 10:47:58.429792   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 10:47:58.429805   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 10:47:58.429817   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 10:47:58.429829   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 10:47:58.429841   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 10:47:58.429852   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 10:47:58.429862   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 10:47:58.429916   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 10:47:58.429942   24174 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 10:47:58.429951   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 10:47:58.429972   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 10:47:58.429992   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 10:47:58.430011   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 10:47:58.430045   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 10:47:58.430069   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /usr/share/ca-certificates/130982.pem
	I0722 10:47:58.430082   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:47:58.430095   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem -> /usr/share/ca-certificates/13098.pem
	I0722 10:47:58.430123   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:47:58.432873   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:47:58.433194   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:47:58.433234   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:47:58.433383   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:47:58.433570   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:47:58.433710   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:47:58.433814   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:47:58.504804   24174 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0722 10:47:58.511555   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0722 10:47:58.522897   24174 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0722 10:47:58.526822   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0722 10:47:58.536795   24174 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0722 10:47:58.541569   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0722 10:47:58.551799   24174 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0722 10:47:58.555654   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0722 10:47:58.566538   24174 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0722 10:47:58.570378   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0722 10:47:58.580400   24174 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0722 10:47:58.584220   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0722 10:47:58.594107   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 10:47:58.620357   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 10:47:58.643801   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 10:47:58.665955   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 10:47:58.689217   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0722 10:47:58.713811   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 10:47:58.737216   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 10:47:58.760447   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 10:47:58.784915   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 10:47:58.808121   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 10:47:58.830169   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 10:47:58.852391   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0722 10:47:58.868323   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0722 10:47:58.884320   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0722 10:47:58.899981   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0722 10:47:58.915490   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0722 10:47:58.931428   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0722 10:47:58.946940   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0722 10:47:58.962309   24174 ssh_runner.go:195] Run: openssl version
	I0722 10:47:58.968094   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 10:47:58.978946   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 10:47:58.983292   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 10:47:58.983337   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 10:47:58.989083   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 10:47:59.000666   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 10:47:59.011239   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:47:59.015698   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:47:59.015755   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:47:59.021600   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 10:47:59.032920   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 10:47:59.045008   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 10:47:59.049309   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 10:47:59.049358   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 10:47:59.054801   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
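
Each CA installed under /usr/share/ca-certificates also gets an OpenSSL subject-hash symlink in /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem), which is what the openssl x509 -hash / ln -fs pairs above are doing. A sketch of that pairing, assuming openssl on PATH and root for the symlink:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the <hash>.0 symlink that OpenSSL-based tools use to
// find a trusted certificate, matching the openssl/ln commands above.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // -f behaviour: replace any stale link
	return link, os.Symlink(certPath, link)
}

func main() {
	// Paths are illustrative; writing under /etc/ssl/certs needs root.
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Println("link failed:", err)
		return
	}
	fmt.Println("created", link)
}
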
	I0722 10:47:59.065399   24174 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 10:47:59.069327   24174 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 10:47:59.069381   24174 kubeadm.go:934] updating node {m02 192.168.39.207 8443 v1.30.3 crio true true} ...
	I0722 10:47:59.069465   24174 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-461283-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 10:47:59.069491   24174 kube-vip.go:115] generating kube-vip config ...
	I0722 10:47:59.069525   24174 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 10:47:59.086122   24174 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 10:47:59.086186   24174 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
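
The generated manifest above runs kube-vip as a static pod: an ARP-advertised VIP of 192.168.39.254 on eth0, leader election across control planes, and (auto-enabled above) load balancing of port 8443. A cut-down sketch of rendering such a manifest from per-cluster parameters with text/template; the template text here is illustrative, not minikube's actual one:

package main

import (
	"os"
	"text/template"
)

// A cut-down stand-in for the kube-vip static-pod template; only the fields that
// vary per cluster (VIP address, interface, API port) are parameterised here.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - {name: vip_arp, value: "true"}
    - {name: port, value: "{{ .Port }}"}
    - {name: vip_interface, value: {{ .Interface }}}
    - {name: cp_enable, value: "true"}
    - {name: vip_leaderelection, value: "true"}
    - {name: address, value: {{ .Address }}}
    - {name: lb_enable, value: "true"}
    - {name: lb_port, value: "{{ .Port }}"}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values taken from the config generated above.
	_ = t.Execute(os.Stdout, struct {
		Address, Interface string
		Port               int
	}{Address: "192.168.39.254", Interface: "eth0", Port: 8443})
}
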
	I0722 10:47:59.086228   24174 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 10:47:59.095586   24174 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0722 10:47:59.095645   24174 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0722 10:47:59.104618   24174 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0722 10:47:59.104642   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 10:47:59.104708   24174 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 10:47:59.104733   24174 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0722 10:47:59.104767   24174 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0722 10:47:59.108710   24174 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0722 10:47:59.108735   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0722 10:47:59.745705   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 10:47:59.745789   24174 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 10:47:59.751699   24174 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0722 10:47:59.751726   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0722 10:48:00.580026   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:48:00.596841   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 10:48:00.596944   24174 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 10:48:00.601828   24174 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0722 10:48:00.601861   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
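
Each missing binary above is fetched from dl.k8s.io with a checksum=file:...sha256 URL, i.e. the download is verified against the published SHA-256 before being copied into /var/lib/minikube/binaries. A sketch of that verify-while-downloading pattern (kubectl used as the example; the destination path is arbitrary):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dest and checks it against the hex digest
// published at url+".sha256", the same scheme dl.k8s.io uses for its binaries.
func fetchVerified(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"
	if err := fetchVerified(url, "/tmp/kubectl"); err != nil {
		fmt.Println("download failed:", err)
		return
	}
	fmt.Println("kubectl downloaded and verified")
}
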
	I0722 10:48:01.011204   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0722 10:48:01.020993   24174 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0722 10:48:01.037349   24174 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 10:48:01.053880   24174 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0722 10:48:01.069804   24174 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0722 10:48:01.073484   24174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 10:48:01.085558   24174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:48:01.205840   24174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 10:48:01.222485   24174 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:48:01.222954   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:48:01.222989   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:48:01.238394   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37465
	I0722 10:48:01.238873   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:48:01.239358   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:48:01.239385   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:48:01.239718   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:48:01.239938   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:48:01.240150   24174 start.go:317] joinCluster: &{Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:48:01.240274   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0722 10:48:01.240300   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:48:01.243159   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:48:01.243499   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:48:01.243525   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:48:01.243693   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:48:01.243866   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:48:01.244131   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:48:01.244331   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:48:01.411425   24174 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:48:01.411471   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ypay92.mav1gf1d3e8n4m1h --discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-461283-m02 --control-plane --apiserver-advertise-address=192.168.39.207 --apiserver-bind-port=8443"
	I0722 10:48:24.691472   24174 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ypay92.mav1gf1d3e8n4m1h --discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-461283-m02 --control-plane --apiserver-advertise-address=192.168.39.207 --apiserver-bind-port=8443": (23.279975288s)
	I0722 10:48:24.691512   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0722 10:48:25.300884   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-461283-m02 minikube.k8s.io/updated_at=2024_07_22T10_48_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=ha-461283 minikube.k8s.io/primary=false
	I0722 10:48:25.436971   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-461283-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0722 10:48:25.550787   24174 start.go:319] duration metric: took 24.310634091s to joinCluster
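
The join above is two steps: `kubeadm token create --print-join-command --ttl=0` on the existing control plane, then that command re-run on m02 with the node-specific flags (--control-plane, --apiserver-advertise-address, CRI socket, node name), followed by the kubelet enable, labelling and taint removal. A sketch of assembling that command; actually running it is left out, and the flag set simply mirrors the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// buildJoinCommand takes the output of `kubeadm token create --print-join-command`
// (run on the primary control plane) and appends the per-node flags seen above.
// Running the result on the new node is what actually joins it to the cluster.
func buildJoinCommand(printJoinOutput, nodeName, advertiseIP string) string {
	base := strings.TrimSpace(printJoinOutput)
	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		"--apiserver-bind-port=8443",
	}
	return base + " " + strings.Join(extra, " ")
}

func main() {
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		fmt.Println("kubeadm not available here:", err)
		return
	}
	fmt.Println(buildJoinCommand(string(out), "ha-461283-m02", "192.168.39.207"))
}
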
	I0722 10:48:25.550873   24174 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:48:25.551125   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:48:25.552212   24174 out.go:177] * Verifying Kubernetes components...
	I0722 10:48:25.553610   24174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:48:25.799483   24174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 10:48:25.843019   24174 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:48:25.843284   24174 kapi.go:59] client config for ha-461283: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.crt", KeyFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key", CAFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0722 10:48:25.843363   24174 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.43:8443
	I0722 10:48:25.843574   24174 node_ready.go:35] waiting up to 6m0s for node "ha-461283-m02" to be "Ready" ...
	I0722 10:48:25.843644   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:25.843652   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:25.843659   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:25.843662   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:25.863242   24174 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0722 10:48:26.344774   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:26.344800   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:26.344811   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:26.344817   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:26.353893   24174 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 10:48:26.843989   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:26.844014   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:26.844023   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:26.844028   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:26.852768   24174 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 10:48:27.344744   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:27.344763   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:27.344770   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:27.344775   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:27.350729   24174 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 10:48:27.844036   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:27.844059   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:27.844068   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:27.844073   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:27.847268   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:27.848029   24174 node_ready.go:53] node "ha-461283-m02" has status "Ready":"False"
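
The GET loop around this point polls /api/v1/nodes/ha-461283-m02 roughly every 500ms until the node's Ready condition turns True (node_ready.go gives it up to 6 minutes). The same wait written against client-go, assuming the kubeconfig path from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node object until its Ready condition is True,
// which is what the GET loop above is doing by hand against the API server.
func waitForNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19313-5960/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(cs, "ha-461283-m02", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
		return
	}
	fmt.Println("node ha-461283-m02 is Ready")
}
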
	I0722 10:48:28.343700   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:28.343720   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:28.343730   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:28.343734   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:28.346747   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:28.844668   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:28.844693   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:28.844703   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:28.844709   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:28.847359   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:29.344402   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:29.344422   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:29.344429   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:29.344434   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:29.347445   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:29.844245   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:29.844267   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:29.844279   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:29.844286   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:29.846563   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:30.343961   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:30.343985   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:30.343995   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:30.344002   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:30.346955   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:30.347453   24174 node_ready.go:53] node "ha-461283-m02" has status "Ready":"False"
	I0722 10:48:30.843716   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:30.843734   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:30.843741   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:30.843744   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:30.846470   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:31.344014   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:31.344036   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:31.344047   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:31.344051   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:31.347040   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:31.844063   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:31.844083   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:31.844091   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:31.844095   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:31.846983   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:32.343831   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:32.343855   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:32.343862   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:32.343866   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:32.347186   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:32.347771   24174 node_ready.go:53] node "ha-461283-m02" has status "Ready":"False"
	I0722 10:48:32.844046   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:32.844068   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:32.844076   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:32.844081   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:32.848142   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:48:33.344485   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:33.344516   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:33.344523   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:33.344527   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:33.348125   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:33.844085   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:33.844111   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:33.844123   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:33.844130   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:33.846798   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:34.343783   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:34.343805   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:34.343816   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:34.343823   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:34.346974   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:34.347904   24174 node_ready.go:53] node "ha-461283-m02" has status "Ready":"False"
	I0722 10:48:34.844249   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:34.844270   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:34.844278   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:34.844281   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:34.847398   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:35.344451   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:35.344473   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:35.344481   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:35.344484   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:35.347665   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:35.844077   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:35.844102   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:35.844114   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:35.844118   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:35.847177   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:36.344643   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:36.344665   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:36.344676   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:36.344681   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:36.348405   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:36.348982   24174 node_ready.go:53] node "ha-461283-m02" has status "Ready":"False"
	I0722 10:48:36.844453   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:36.844474   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:36.844482   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:36.844486   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:36.848497   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:37.344572   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:37.344599   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:37.344610   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:37.344616   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:37.348269   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:37.844700   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:37.844723   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:37.844734   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:37.844740   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:37.847962   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:38.343890   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:38.343910   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:38.343918   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:38.343923   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:38.347069   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:38.844482   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:38.844507   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:38.844519   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:38.844527   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:38.847362   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:38.847891   24174 node_ready.go:53] node "ha-461283-m02" has status "Ready":"False"
	I0722 10:48:39.344180   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:39.344205   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.344213   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.344218   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.347692   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:39.844660   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:39.844683   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.844692   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.844698   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.847829   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:39.848459   24174 node_ready.go:49] node "ha-461283-m02" has status "Ready":"True"
	I0722 10:48:39.848477   24174 node_ready.go:38] duration metric: took 14.004887367s for node "ha-461283-m02" to be "Ready" ...
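The loop above — re-fetching the node object roughly every 500ms until its Ready condition reports True — is a standard node-readiness poll. A minimal sketch of the same pattern with client-go follows; it is not minikube's own implementation, and the kubeconfig path and node name are placeholders.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms, for up to 6 minutes, until the node reports Ready=True.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, getErr := client.CoreV1().Nodes().Get(ctx, "ha-461283-m02", metav1.GetOptions{})
			if getErr != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready:", err == nil)
}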
	I0722 10:48:39.848485   24174 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 10:48:39.848534   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:48:39.848543   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.848550   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.848553   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.852902   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:48:39.859233   24174 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qrfdd" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:39.859290   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-qrfdd
	I0722 10:48:39.859298   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.859306   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.859310   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.861613   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:39.862209   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:39.862223   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.862230   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.862234   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.864043   24174 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 10:48:39.864695   24174 pod_ready.go:92] pod "coredns-7db6d8ff4d-qrfdd" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:39.864709   24174 pod_ready.go:81] duration metric: took 5.457806ms for pod "coredns-7db6d8ff4d-qrfdd" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:39.864716   24174 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zb547" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:39.864754   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zb547
	I0722 10:48:39.864761   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.864767   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.864770   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.867561   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:39.868547   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:39.868560   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.868567   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.868571   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.870916   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:39.871417   24174 pod_ready.go:92] pod "coredns-7db6d8ff4d-zb547" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:39.871431   24174 pod_ready.go:81] duration metric: took 6.70921ms for pod "coredns-7db6d8ff4d-zb547" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:39.871438   24174 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:39.871489   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-ha-461283
	I0722 10:48:39.871500   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.871510   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.871515   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.873780   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:39.874369   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:39.874384   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.874393   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.874399   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.876544   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:39.877280   24174 pod_ready.go:92] pod "etcd-ha-461283" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:39.877293   24174 pod_ready.go:81] duration metric: took 5.849097ms for pod "etcd-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:39.877299   24174 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:39.877345   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-ha-461283-m02
	I0722 10:48:39.877354   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.877361   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.877364   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.879962   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:39.880946   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:39.880959   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.880968   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.880974   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.887680   24174 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 10:48:40.377819   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-ha-461283-m02
	I0722 10:48:40.377850   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:40.377858   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:40.377865   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:40.381180   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:40.381693   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:40.381706   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:40.381715   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:40.381719   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:40.384712   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:40.878063   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-ha-461283-m02
	I0722 10:48:40.878084   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:40.878092   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:40.878100   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:40.881150   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:40.881934   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:40.881948   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:40.881956   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:40.881959   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:40.884560   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:40.885088   24174 pod_ready.go:92] pod "etcd-ha-461283-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:40.885107   24174 pod_ready.go:81] duration metric: took 1.007801941s for pod "etcd-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:40.885127   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:40.885171   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283
	I0722 10:48:40.885178   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:40.885186   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:40.885189   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:40.887601   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:41.045448   24174 request.go:629] Waited for 157.314344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:41.045509   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:41.045517   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:41.045527   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:41.045546   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:41.048170   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:41.048948   24174 pod_ready.go:92] pod "kube-apiserver-ha-461283" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:41.048962   24174 pod_ready.go:81] duration metric: took 163.829366ms for pod "kube-apiserver-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:41.048973   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:41.245382   24174 request.go:629] Waited for 196.340468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m02
	I0722 10:48:41.245436   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m02
	I0722 10:48:41.245443   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:41.245470   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:41.245476   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:41.248579   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:41.444648   24174 request.go:629] Waited for 195.12048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:41.444729   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:41.444736   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:41.444746   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:41.444753   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:41.448003   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:41.448735   24174 pod_ready.go:92] pod "kube-apiserver-ha-461283-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:41.448753   24174 pod_ready.go:81] duration metric: took 399.770264ms for pod "kube-apiserver-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:41.448762   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:41.644986   24174 request.go:629] Waited for 196.107358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283
	I0722 10:48:41.645039   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283
	I0722 10:48:41.645046   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:41.645056   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:41.645064   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:41.648469   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:41.845599   24174 request.go:629] Waited for 196.436498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:41.845831   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:41.845844   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:41.845856   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:41.845868   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:41.850996   24174 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 10:48:41.852097   24174 pod_ready.go:92] pod "kube-controller-manager-ha-461283" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:41.852116   24174 pod_ready.go:81] duration metric: took 403.346955ms for pod "kube-controller-manager-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:41.852129   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:42.045145   24174 request.go:629] Waited for 192.95325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283-m02
	I0722 10:48:42.045239   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283-m02
	I0722 10:48:42.045251   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:42.045258   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:42.045264   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:42.047596   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:42.245453   24174 request.go:629] Waited for 197.350124ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:42.245528   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:42.245539   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:42.245551   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:42.245559   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:42.248372   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:42.248862   24174 pod_ready.go:92] pod "kube-controller-manager-ha-461283-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:42.248880   24174 pod_ready.go:81] duration metric: took 396.744128ms for pod "kube-controller-manager-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:42.248890   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-28zxf" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:42.445033   24174 request.go:629] Waited for 196.085737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-28zxf
	I0722 10:48:42.445106   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-28zxf
	I0722 10:48:42.445116   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:42.445123   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:42.445128   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:42.448498   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:42.645624   24174 request.go:629] Waited for 196.365494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:42.645673   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:42.645678   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:42.645685   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:42.645690   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:42.648527   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:42.649134   24174 pod_ready.go:92] pod "kube-proxy-28zxf" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:42.649151   24174 pod_ready.go:81] duration metric: took 400.253951ms for pod "kube-proxy-28zxf" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:42.649160   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xkbsx" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:42.845279   24174 request.go:629] Waited for 196.062558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkbsx
	I0722 10:48:42.845384   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkbsx
	I0722 10:48:42.845395   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:42.845406   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:42.845416   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:42.849246   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:43.044710   24174 request.go:629] Waited for 194.2934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:43.044777   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:43.044783   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:43.044790   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:43.044797   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:43.047731   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:43.048289   24174 pod_ready.go:92] pod "kube-proxy-xkbsx" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:43.048307   24174 pod_ready.go:81] duration metric: took 399.140003ms for pod "kube-proxy-xkbsx" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:43.048318   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:43.245697   24174 request.go:629] Waited for 197.316846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283
	I0722 10:48:43.245778   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283
	I0722 10:48:43.245788   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:43.245800   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:43.245811   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:43.249114   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:43.445283   24174 request.go:629] Waited for 195.497705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:43.445351   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:43.445359   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:43.445369   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:43.445374   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:43.448694   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:43.449520   24174 pod_ready.go:92] pod "kube-scheduler-ha-461283" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:43.449537   24174 pod_ready.go:81] duration metric: took 401.211193ms for pod "kube-scheduler-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:43.449546   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:43.645724   24174 request.go:629] Waited for 196.109694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283-m02
	I0722 10:48:43.645794   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283-m02
	I0722 10:48:43.645802   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:43.645813   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:43.645822   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:43.649328   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:43.845450   24174 request.go:629] Waited for 195.380755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:43.845521   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:43.845528   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:43.845537   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:43.845543   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:43.848353   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:43.848987   24174 pod_ready.go:92] pod "kube-scheduler-ha-461283-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:43.849004   24174 pod_ready.go:81] duration metric: took 399.45262ms for pod "kube-scheduler-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:43.849014   24174 pod_ready.go:38] duration metric: took 4.000520366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
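The recurring "Waited for … due to client-side throttling, not priority and fairness" lines above come from client-go's default client-side rate limiter (QPS 5, burst 10), not from the API server. A minimal sketch, assuming an ordinary rest.Config, of how a client can raise those limits; the values are illustrative only.

package kubeclient

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFasterClient builds a clientset with higher client-side rate limits.
// The defaults (QPS=5, Burst=10) are what produce the throttling waits
// logged above once many objects are polled in quick succession.
func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // illustrative value
	cfg.Burst = 100 // illustrative value
	return kubernetes.NewForConfig(cfg)
}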
	I0722 10:48:43.849029   24174 api_server.go:52] waiting for apiserver process to appear ...
	I0722 10:48:43.849081   24174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:48:43.864186   24174 api_server.go:72] duration metric: took 18.313277926s to wait for apiserver process to appear ...
	I0722 10:48:43.864203   24174 api_server.go:88] waiting for apiserver healthz status ...
	I0722 10:48:43.864217   24174 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0722 10:48:43.868250   24174 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I0722 10:48:43.868313   24174 round_trippers.go:463] GET https://192.168.39.43:8443/version
	I0722 10:48:43.868324   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:43.868334   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:43.868345   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:43.869157   24174 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0722 10:48:43.869256   24174 api_server.go:141] control plane version: v1.30.3
	I0722 10:48:43.869274   24174 api_server.go:131] duration metric: took 5.065194ms to wait for apiserver health ...
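The two requests just logged — GET /healthz returning the literal body "ok", then GET /version reporting v1.30.3 — map onto client-go's discovery client. A minimal sketch of the same checks, again not minikube's code, with a placeholder kubeconfig path.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz: a healthy apiserver answers 200 with the body "ok".
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version: the control plane version (v1.30.3 in the run above).
	info, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", info.GitVersion)
}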
	I0722 10:48:43.869284   24174 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 10:48:44.045535   24174 request.go:629] Waited for 176.181322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:48:44.045588   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:48:44.045593   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:44.045601   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:44.045606   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:44.053011   24174 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 10:48:44.059851   24174 system_pods.go:59] 17 kube-system pods found
	I0722 10:48:44.059876   24174 system_pods.go:61] "coredns-7db6d8ff4d-qrfdd" [f1c9698a-e97d-4b8a-ab71-f19003b5dcfd] Running
	I0722 10:48:44.059881   24174 system_pods.go:61] "coredns-7db6d8ff4d-zb547" [54886641-9710-4355-86ff-016ad48b5cd5] Running
	I0722 10:48:44.059885   24174 system_pods.go:61] "etcd-ha-461283" [842e06f5-5c51-4cd9-b6ab-b3a8cbc9e23b] Running
	I0722 10:48:44.059888   24174 system_pods.go:61] "etcd-ha-461283-m02" [832101e1-09b9-4b1c-a39b-77c46725a280] Running
	I0722 10:48:44.059892   24174 system_pods.go:61] "kindnet-hmrqh" [abe55aff-7926-481f-90cd-3cc209d79f63] Running
	I0722 10:48:44.059895   24174 system_pods.go:61] "kindnet-qsphb" [6b302f3f-51ae-4492-8ac3-470e7739ad08] Running
	I0722 10:48:44.059898   24174 system_pods.go:61] "kube-apiserver-ha-461283" [ca55ae7f-0148-4802-b9cb-424453f13992] Running
	I0722 10:48:44.059901   24174 system_pods.go:61] "kube-apiserver-ha-461283-m02" [d19287ef-f418-4ec5-bb43-e42dd94562ea] Running
	I0722 10:48:44.059904   24174 system_pods.go:61] "kube-controller-manager-ha-461283" [3adf0e38-7eb7-4945-9059-5371718a8d92] Running
	I0722 10:48:44.059907   24174 system_pods.go:61] "kube-controller-manager-ha-461283-m02" [d1cebc09-9543-4d78-a1b9-785e4c489814] Running
	I0722 10:48:44.059910   24174 system_pods.go:61] "kube-proxy-28zxf" [5894062f-0d05-45f4-88eb-da134f234e2d] Running
	I0722 10:48:44.059913   24174 system_pods.go:61] "kube-proxy-xkbsx" [9d137555-9952-418f-bbfb-2159a48bbfcc] Running
	I0722 10:48:44.059916   24174 system_pods.go:61] "kube-scheduler-ha-461283" [3c18099b-16d8-4214-92c8-b583323bed9b] Running
	I0722 10:48:44.059919   24174 system_pods.go:61] "kube-scheduler-ha-461283-m02" [bdffe858-ca6b-4f8c-951a-e08115dff406] Running
	I0722 10:48:44.059921   24174 system_pods.go:61] "kube-vip-ha-461283" [244dde01-94fe-46c1-82f2-92ca2624750e] Running
	I0722 10:48:44.059926   24174 system_pods.go:61] "kube-vip-ha-461283-m02" [a74a9071-1b29-4c1a-abc4-b57a7499e3d8] Running
	I0722 10:48:44.059928   24174 system_pods.go:61] "storage-provisioner" [a336a57b-330a-4251-8e33-2b277593a565] Running
	I0722 10:48:44.059933   24174 system_pods.go:74] duration metric: took 190.641674ms to wait for pod list to return data ...
	I0722 10:48:44.059943   24174 default_sa.go:34] waiting for default service account to be created ...
	I0722 10:48:44.245377   24174 request.go:629] Waited for 185.370785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/default/serviceaccounts
	I0722 10:48:44.245427   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/default/serviceaccounts
	I0722 10:48:44.245432   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:44.245438   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:44.245442   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:44.248417   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:44.248661   24174 default_sa.go:45] found service account: "default"
	I0722 10:48:44.248679   24174 default_sa.go:55] duration metric: took 188.730585ms for default service account to be created ...
	I0722 10:48:44.248688   24174 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 10:48:44.444934   24174 request.go:629] Waited for 196.187287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:48:44.445012   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:48:44.445017   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:44.445025   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:44.445032   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:44.450361   24174 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 10:48:44.457320   24174 system_pods.go:86] 17 kube-system pods found
	I0722 10:48:44.457343   24174 system_pods.go:89] "coredns-7db6d8ff4d-qrfdd" [f1c9698a-e97d-4b8a-ab71-f19003b5dcfd] Running
	I0722 10:48:44.457348   24174 system_pods.go:89] "coredns-7db6d8ff4d-zb547" [54886641-9710-4355-86ff-016ad48b5cd5] Running
	I0722 10:48:44.457353   24174 system_pods.go:89] "etcd-ha-461283" [842e06f5-5c51-4cd9-b6ab-b3a8cbc9e23b] Running
	I0722 10:48:44.457357   24174 system_pods.go:89] "etcd-ha-461283-m02" [832101e1-09b9-4b1c-a39b-77c46725a280] Running
	I0722 10:48:44.457361   24174 system_pods.go:89] "kindnet-hmrqh" [abe55aff-7926-481f-90cd-3cc209d79f63] Running
	I0722 10:48:44.457364   24174 system_pods.go:89] "kindnet-qsphb" [6b302f3f-51ae-4492-8ac3-470e7739ad08] Running
	I0722 10:48:44.457369   24174 system_pods.go:89] "kube-apiserver-ha-461283" [ca55ae7f-0148-4802-b9cb-424453f13992] Running
	I0722 10:48:44.457377   24174 system_pods.go:89] "kube-apiserver-ha-461283-m02" [d19287ef-f418-4ec5-bb43-e42dd94562ea] Running
	I0722 10:48:44.457385   24174 system_pods.go:89] "kube-controller-manager-ha-461283" [3adf0e38-7eb7-4945-9059-5371718a8d92] Running
	I0722 10:48:44.457394   24174 system_pods.go:89] "kube-controller-manager-ha-461283-m02" [d1cebc09-9543-4d78-a1b9-785e4c489814] Running
	I0722 10:48:44.457401   24174 system_pods.go:89] "kube-proxy-28zxf" [5894062f-0d05-45f4-88eb-da134f234e2d] Running
	I0722 10:48:44.457410   24174 system_pods.go:89] "kube-proxy-xkbsx" [9d137555-9952-418f-bbfb-2159a48bbfcc] Running
	I0722 10:48:44.457414   24174 system_pods.go:89] "kube-scheduler-ha-461283" [3c18099b-16d8-4214-92c8-b583323bed9b] Running
	I0722 10:48:44.457418   24174 system_pods.go:89] "kube-scheduler-ha-461283-m02" [bdffe858-ca6b-4f8c-951a-e08115dff406] Running
	I0722 10:48:44.457421   24174 system_pods.go:89] "kube-vip-ha-461283" [244dde01-94fe-46c1-82f2-92ca2624750e] Running
	I0722 10:48:44.457428   24174 system_pods.go:89] "kube-vip-ha-461283-m02" [a74a9071-1b29-4c1a-abc4-b57a7499e3d8] Running
	I0722 10:48:44.457431   24174 system_pods.go:89] "storage-provisioner" [a336a57b-330a-4251-8e33-2b277593a565] Running
	I0722 10:48:44.457437   24174 system_pods.go:126] duration metric: took 208.742477ms to wait for k8s-apps to be running ...
	I0722 10:48:44.457446   24174 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 10:48:44.457492   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:48:44.472821   24174 system_svc.go:56] duration metric: took 15.367443ms WaitForService to wait for kubelet
	I0722 10:48:44.472846   24174 kubeadm.go:582] duration metric: took 18.921938085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 10:48:44.472866   24174 node_conditions.go:102] verifying NodePressure condition ...
	I0722 10:48:44.645244   24174 request.go:629] Waited for 172.313585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes
	I0722 10:48:44.645304   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes
	I0722 10:48:44.645324   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:44.645335   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:44.645340   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:44.648848   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:44.649597   24174 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 10:48:44.649616   24174 node_conditions.go:123] node cpu capacity is 2
	I0722 10:48:44.649629   24174 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 10:48:44.649635   24174 node_conditions.go:123] node cpu capacity is 2
	I0722 10:48:44.649640   24174 node_conditions.go:105] duration metric: took 176.768458ms to run NodePressure ...
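The NodePressure verification above reads each node's reported capacity (ephemeral storage 17734596Ki and 2 CPUs per node in this run). A comparable listing with client-go, as a hedged sketch rather than the exact minikube logic; the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity mirrors the "ephemeral capacity" / "cpu capacity" values logged above.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}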
	I0722 10:48:44.649654   24174 start.go:241] waiting for startup goroutines ...
	I0722 10:48:44.649689   24174 start.go:255] writing updated cluster config ...
	I0722 10:48:44.652165   24174 out.go:177] 
	I0722 10:48:44.653480   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:48:44.653578   24174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:48:44.655144   24174 out.go:177] * Starting "ha-461283-m03" control-plane node in "ha-461283" cluster
	I0722 10:48:44.656272   24174 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:48:44.656289   24174 cache.go:56] Caching tarball of preloaded images
	I0722 10:48:44.656371   24174 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 10:48:44.656395   24174 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 10:48:44.656479   24174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:48:44.656690   24174 start.go:360] acquireMachinesLock for ha-461283-m03: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 10:48:44.656729   24174 start.go:364] duration metric: took 22.177µs to acquireMachinesLock for "ha-461283-m03"
	I0722 10:48:44.656744   24174 start.go:93] Provisioning new machine with config: &{Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:48:44.656824   24174 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0722 10:48:44.658312   24174 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 10:48:44.658378   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:48:44.658409   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:48:44.672972   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I0722 10:48:44.673379   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:48:44.673764   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:48:44.673784   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:48:44.674099   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:48:44.674280   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetMachineName
	I0722 10:48:44.674434   24174 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:48:44.674583   24174 start.go:159] libmachine.API.Create for "ha-461283" (driver="kvm2")
	I0722 10:48:44.674610   24174 client.go:168] LocalClient.Create starting
	I0722 10:48:44.674640   24174 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem
	I0722 10:48:44.674673   24174 main.go:141] libmachine: Decoding PEM data...
	I0722 10:48:44.674690   24174 main.go:141] libmachine: Parsing certificate...
	I0722 10:48:44.674753   24174 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem
	I0722 10:48:44.674778   24174 main.go:141] libmachine: Decoding PEM data...
	I0722 10:48:44.674791   24174 main.go:141] libmachine: Parsing certificate...
	I0722 10:48:44.674816   24174 main.go:141] libmachine: Running pre-create checks...
	I0722 10:48:44.674827   24174 main.go:141] libmachine: (ha-461283-m03) Calling .PreCreateCheck
	I0722 10:48:44.674986   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetConfigRaw
	I0722 10:48:44.675313   24174 main.go:141] libmachine: Creating machine...
	I0722 10:48:44.675329   24174 main.go:141] libmachine: (ha-461283-m03) Calling .Create
	I0722 10:48:44.675457   24174 main.go:141] libmachine: (ha-461283-m03) Creating KVM machine...
	I0722 10:48:44.676646   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found existing default KVM network
	I0722 10:48:44.676771   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found existing private KVM network mk-ha-461283
	I0722 10:48:44.676899   24174 main.go:141] libmachine: (ha-461283-m03) Setting up store path in /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03 ...
	I0722 10:48:44.676920   24174 main.go:141] libmachine: (ha-461283-m03) Building disk image from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 10:48:44.676981   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:44.676896   24968 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:48:44.677054   24174 main.go:141] libmachine: (ha-461283-m03) Downloading /home/jenkins/minikube-integration/19313-5960/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 10:48:44.916618   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:44.916520   24968 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa...
	I0722 10:48:45.260636   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:45.260508   24968 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/ha-461283-m03.rawdisk...
	I0722 10:48:45.260676   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Writing magic tar header
	I0722 10:48:45.260692   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Writing SSH key tar header
	I0722 10:48:45.260705   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:45.260651   24968 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03 ...
	I0722 10:48:45.260791   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03
	I0722 10:48:45.260830   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines
	I0722 10:48:45.260856   24174 main.go:141] libmachine: (ha-461283-m03) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03 (perms=drwx------)
	I0722 10:48:45.260868   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:48:45.260885   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960
	I0722 10:48:45.260896   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0722 10:48:45.260909   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Checking permissions on dir: /home/jenkins
	I0722 10:48:45.260924   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Checking permissions on dir: /home
	I0722 10:48:45.260937   24174 main.go:141] libmachine: (ha-461283-m03) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines (perms=drwxr-xr-x)
	I0722 10:48:45.260949   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Skipping /home - not owner
	I0722 10:48:45.260966   24174 main.go:141] libmachine: (ha-461283-m03) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube (perms=drwxr-xr-x)
	I0722 10:48:45.260981   24174 main.go:141] libmachine: (ha-461283-m03) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960 (perms=drwxrwxr-x)
	I0722 10:48:45.260993   24174 main.go:141] libmachine: (ha-461283-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0722 10:48:45.261006   24174 main.go:141] libmachine: (ha-461283-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0722 10:48:45.261016   24174 main.go:141] libmachine: (ha-461283-m03) Creating domain...
	I0722 10:48:45.261834   24174 main.go:141] libmachine: (ha-461283-m03) define libvirt domain using xml: 
	I0722 10:48:45.261858   24174 main.go:141] libmachine: (ha-461283-m03) <domain type='kvm'>
	I0722 10:48:45.261870   24174 main.go:141] libmachine: (ha-461283-m03)   <name>ha-461283-m03</name>
	I0722 10:48:45.261879   24174 main.go:141] libmachine: (ha-461283-m03)   <memory unit='MiB'>2200</memory>
	I0722 10:48:45.261892   24174 main.go:141] libmachine: (ha-461283-m03)   <vcpu>2</vcpu>
	I0722 10:48:45.261902   24174 main.go:141] libmachine: (ha-461283-m03)   <features>
	I0722 10:48:45.261912   24174 main.go:141] libmachine: (ha-461283-m03)     <acpi/>
	I0722 10:48:45.261922   24174 main.go:141] libmachine: (ha-461283-m03)     <apic/>
	I0722 10:48:45.261936   24174 main.go:141] libmachine: (ha-461283-m03)     <pae/>
	I0722 10:48:45.261946   24174 main.go:141] libmachine: (ha-461283-m03)     
	I0722 10:48:45.261970   24174 main.go:141] libmachine: (ha-461283-m03)   </features>
	I0722 10:48:45.261991   24174 main.go:141] libmachine: (ha-461283-m03)   <cpu mode='host-passthrough'>
	I0722 10:48:45.261998   24174 main.go:141] libmachine: (ha-461283-m03)   
	I0722 10:48:45.262007   24174 main.go:141] libmachine: (ha-461283-m03)   </cpu>
	I0722 10:48:45.262016   24174 main.go:141] libmachine: (ha-461283-m03)   <os>
	I0722 10:48:45.262026   24174 main.go:141] libmachine: (ha-461283-m03)     <type>hvm</type>
	I0722 10:48:45.262034   24174 main.go:141] libmachine: (ha-461283-m03)     <boot dev='cdrom'/>
	I0722 10:48:45.262041   24174 main.go:141] libmachine: (ha-461283-m03)     <boot dev='hd'/>
	I0722 10:48:45.262047   24174 main.go:141] libmachine: (ha-461283-m03)     <bootmenu enable='no'/>
	I0722 10:48:45.262053   24174 main.go:141] libmachine: (ha-461283-m03)   </os>
	I0722 10:48:45.262059   24174 main.go:141] libmachine: (ha-461283-m03)   <devices>
	I0722 10:48:45.262066   24174 main.go:141] libmachine: (ha-461283-m03)     <disk type='file' device='cdrom'>
	I0722 10:48:45.262074   24174 main.go:141] libmachine: (ha-461283-m03)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/boot2docker.iso'/>
	I0722 10:48:45.262082   24174 main.go:141] libmachine: (ha-461283-m03)       <target dev='hdc' bus='scsi'/>
	I0722 10:48:45.262090   24174 main.go:141] libmachine: (ha-461283-m03)       <readonly/>
	I0722 10:48:45.262094   24174 main.go:141] libmachine: (ha-461283-m03)     </disk>
	I0722 10:48:45.262123   24174 main.go:141] libmachine: (ha-461283-m03)     <disk type='file' device='disk'>
	I0722 10:48:45.262158   24174 main.go:141] libmachine: (ha-461283-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0722 10:48:45.262178   24174 main.go:141] libmachine: (ha-461283-m03)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/ha-461283-m03.rawdisk'/>
	I0722 10:48:45.262190   24174 main.go:141] libmachine: (ha-461283-m03)       <target dev='hda' bus='virtio'/>
	I0722 10:48:45.262201   24174 main.go:141] libmachine: (ha-461283-m03)     </disk>
	I0722 10:48:45.262212   24174 main.go:141] libmachine: (ha-461283-m03)     <interface type='network'>
	I0722 10:48:45.262228   24174 main.go:141] libmachine: (ha-461283-m03)       <source network='mk-ha-461283'/>
	I0722 10:48:45.262240   24174 main.go:141] libmachine: (ha-461283-m03)       <model type='virtio'/>
	I0722 10:48:45.262251   24174 main.go:141] libmachine: (ha-461283-m03)     </interface>
	I0722 10:48:45.262263   24174 main.go:141] libmachine: (ha-461283-m03)     <interface type='network'>
	I0722 10:48:45.262272   24174 main.go:141] libmachine: (ha-461283-m03)       <source network='default'/>
	I0722 10:48:45.262284   24174 main.go:141] libmachine: (ha-461283-m03)       <model type='virtio'/>
	I0722 10:48:45.262294   24174 main.go:141] libmachine: (ha-461283-m03)     </interface>
	I0722 10:48:45.262303   24174 main.go:141] libmachine: (ha-461283-m03)     <serial type='pty'>
	I0722 10:48:45.262318   24174 main.go:141] libmachine: (ha-461283-m03)       <target port='0'/>
	I0722 10:48:45.262329   24174 main.go:141] libmachine: (ha-461283-m03)     </serial>
	I0722 10:48:45.262340   24174 main.go:141] libmachine: (ha-461283-m03)     <console type='pty'>
	I0722 10:48:45.262353   24174 main.go:141] libmachine: (ha-461283-m03)       <target type='serial' port='0'/>
	I0722 10:48:45.262362   24174 main.go:141] libmachine: (ha-461283-m03)     </console>
	I0722 10:48:45.262375   24174 main.go:141] libmachine: (ha-461283-m03)     <rng model='virtio'>
	I0722 10:48:45.262386   24174 main.go:141] libmachine: (ha-461283-m03)       <backend model='random'>/dev/random</backend>
	I0722 10:48:45.262396   24174 main.go:141] libmachine: (ha-461283-m03)     </rng>
	I0722 10:48:45.262408   24174 main.go:141] libmachine: (ha-461283-m03)     
	I0722 10:48:45.262429   24174 main.go:141] libmachine: (ha-461283-m03)     
	I0722 10:48:45.262449   24174 main.go:141] libmachine: (ha-461283-m03)   </devices>
	I0722 10:48:45.262461   24174 main.go:141] libmachine: (ha-461283-m03) </domain>
	I0722 10:48:45.262470   24174 main.go:141] libmachine: (ha-461283-m03) 
	I0722 10:48:45.268874   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:3c:b5:d2 in network default
	I0722 10:48:45.269584   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:45.269612   24174 main.go:141] libmachine: (ha-461283-m03) Ensuring networks are active...
	I0722 10:48:45.270240   24174 main.go:141] libmachine: (ha-461283-m03) Ensuring network default is active
	I0722 10:48:45.270543   24174 main.go:141] libmachine: (ha-461283-m03) Ensuring network mk-ha-461283 is active
	I0722 10:48:45.270958   24174 main.go:141] libmachine: (ha-461283-m03) Getting domain xml...
	I0722 10:48:45.271633   24174 main.go:141] libmachine: (ha-461283-m03) Creating domain...
	I0722 10:48:46.475752   24174 main.go:141] libmachine: (ha-461283-m03) Waiting to get IP...
	I0722 10:48:46.476626   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:46.477027   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:46.477056   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:46.477000   24968 retry.go:31] will retry after 275.121113ms: waiting for machine to come up
	I0722 10:48:46.753462   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:46.754036   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:46.754057   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:46.753902   24968 retry.go:31] will retry after 295.674602ms: waiting for machine to come up
	I0722 10:48:47.052238   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:47.052694   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:47.052724   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:47.052655   24968 retry.go:31] will retry after 451.913479ms: waiting for machine to come up
	I0722 10:48:47.506397   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:47.506876   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:47.506907   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:47.506809   24968 retry.go:31] will retry after 519.604109ms: waiting for machine to come up
	I0722 10:48:48.028482   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:48.028944   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:48.028974   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:48.028893   24968 retry.go:31] will retry after 476.957069ms: waiting for machine to come up
	I0722 10:48:48.507575   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:48.508072   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:48.508116   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:48.508042   24968 retry.go:31] will retry after 608.903487ms: waiting for machine to come up
	I0722 10:48:49.118665   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:49.119083   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:49.119108   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:49.119052   24968 retry.go:31] will retry after 889.181468ms: waiting for machine to come up
	I0722 10:48:50.009468   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:50.009937   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:50.009966   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:50.009893   24968 retry.go:31] will retry after 1.279479167s: waiting for machine to come up
	I0722 10:48:51.291228   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:51.291716   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:51.291745   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:51.291668   24968 retry.go:31] will retry after 1.661195322s: waiting for machine to come up
	I0722 10:48:52.955409   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:52.955765   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:52.955794   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:52.955713   24968 retry.go:31] will retry after 1.546832146s: waiting for machine to come up
	I0722 10:48:54.504366   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:54.504902   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:54.504944   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:54.504835   24968 retry.go:31] will retry after 2.353682552s: waiting for machine to come up
	I0722 10:48:56.861727   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:56.862178   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:56.862203   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:56.862133   24968 retry.go:31] will retry after 3.158413013s: waiting for machine to come up
	I0722 10:49:00.022502   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:00.023022   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:49:00.023045   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:49:00.022979   24968 retry.go:31] will retry after 3.932718421s: waiting for machine to come up
	I0722 10:49:03.957718   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:03.958092   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:49:03.958118   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:49:03.958056   24968 retry.go:31] will retry after 4.074630574s: waiting for machine to come up
	I0722 10:49:08.036477   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.037005   24174 main.go:141] libmachine: (ha-461283-m03) Found IP for machine: 192.168.39.127
	I0722 10:49:08.037024   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has current primary IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.037032   24174 main.go:141] libmachine: (ha-461283-m03) Reserving static IP address...
	I0722 10:49:08.037433   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find host DHCP lease matching {name: "ha-461283-m03", mac: "52:54:00:03:8f:df", ip: "192.168.39.127"} in network mk-ha-461283
	I0722 10:49:08.107902   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Getting to WaitForSSH function...
	I0722 10:49:08.107932   24174 main.go:141] libmachine: (ha-461283-m03) Reserved static IP address: 192.168.39.127
	I0722 10:49:08.107945   24174 main.go:141] libmachine: (ha-461283-m03) Waiting for SSH to be available...
	I0722 10:49:08.110233   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.110734   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:minikube Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.110759   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.110912   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Using SSH client type: external
	I0722 10:49:08.110932   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa (-rw-------)
	I0722 10:49:08.110974   24174 main.go:141] libmachine: (ha-461283-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 10:49:08.110988   24174 main.go:141] libmachine: (ha-461283-m03) DBG | About to run SSH command:
	I0722 10:49:08.111022   24174 main.go:141] libmachine: (ha-461283-m03) DBG | exit 0
	I0722 10:49:08.240542   24174 main.go:141] libmachine: (ha-461283-m03) DBG | SSH cmd err, output: <nil>: 
	I0722 10:49:08.240825   24174 main.go:141] libmachine: (ha-461283-m03) KVM machine creation complete!
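
The "retry.go:31" lines above show libmachine polling for the new VM's DHCP lease with growing, jittered delays until an IP appears and SSH answers. A rough sketch of that backoff loop; lookupIP is a hypothetical stand-in for the lease query, and the delay cap is an assumption:

    // Sketch: jittered retry with a growing delay, in the shape of the
    // "will retry after ...: waiting for machine to come up" lines above.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("no IP yet")

    // lookupIP is a placeholder for querying the network's DHCP leases by MAC.
    func lookupIP(mac string) (string, error) {
        return "", errNoIP // pretend the lease has not appeared yet
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            // Jitter the delay and let it grow, mirroring the log intervals.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("timed out waiting for IP of %s", mac)
    }

    func main() {
        if ip, err := waitForIP("52:54:00:03:8f:df", 2*time.Second); err != nil {
            fmt.Println("error:", err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }
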
	I0722 10:49:08.241178   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetConfigRaw
	I0722 10:49:08.241676   24174 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:49:08.241876   24174 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:49:08.242060   24174 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0722 10:49:08.242075   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetState
	I0722 10:49:08.243399   24174 main.go:141] libmachine: Detecting operating system of created instance...
	I0722 10:49:08.243416   24174 main.go:141] libmachine: Waiting for SSH to be available...
	I0722 10:49:08.243423   24174 main.go:141] libmachine: Getting to WaitForSSH function...
	I0722 10:49:08.243432   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:08.245715   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.246100   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.246127   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.246283   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:08.246461   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.246581   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.246695   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:08.246820   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:49:08.247047   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0722 10:49:08.247061   24174 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0722 10:49:08.359512   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 10:49:08.359531   24174 main.go:141] libmachine: Detecting the provisioner...
	I0722 10:49:08.359538   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:08.362273   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.362612   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.362634   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.362798   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:08.362982   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.363160   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.363287   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:08.363455   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:49:08.363640   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0722 10:49:08.363659   24174 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0722 10:49:08.477195   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0722 10:49:08.477266   24174 main.go:141] libmachine: found compatible host: buildroot
	I0722 10:49:08.477277   24174 main.go:141] libmachine: Provisioning with buildroot...
	I0722 10:49:08.477291   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetMachineName
	I0722 10:49:08.477516   24174 buildroot.go:166] provisioning hostname "ha-461283-m03"
	I0722 10:49:08.477545   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetMachineName
	I0722 10:49:08.477754   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:08.480321   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.480780   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.480803   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.481023   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:08.481177   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.481306   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.481418   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:08.481557   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:49:08.481748   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0722 10:49:08.481762   24174 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-461283-m03 && echo "ha-461283-m03" | sudo tee /etc/hostname
	I0722 10:49:08.606844   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-461283-m03
	
	I0722 10:49:08.606886   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:08.609767   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.610210   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.610239   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.610387   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:08.610594   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.610752   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.610913   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:08.611058   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:49:08.611216   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0722 10:49:08.611233   24174 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-461283-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-461283-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-461283-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 10:49:08.733722   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 10:49:08.733751   24174 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 10:49:08.733790   24174 buildroot.go:174] setting up certificates
	I0722 10:49:08.733807   24174 provision.go:84] configureAuth start
	I0722 10:49:08.733826   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetMachineName
	I0722 10:49:08.734125   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:49:08.736480   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.736866   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.736892   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.737028   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:08.739129   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.739445   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.739470   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.739608   24174 provision.go:143] copyHostCerts
	I0722 10:49:08.739638   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 10:49:08.739666   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 10:49:08.739676   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 10:49:08.739738   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 10:49:08.739800   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 10:49:08.739817   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 10:49:08.739825   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 10:49:08.739852   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 10:49:08.739901   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 10:49:08.739917   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 10:49:08.739923   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 10:49:08.739943   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 10:49:08.739988   24174 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.ha-461283-m03 san=[127.0.0.1 192.168.39.127 ha-461283-m03 localhost minikube]
	I0722 10:49:08.820848   24174 provision.go:177] copyRemoteCerts
	I0722 10:49:08.820914   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 10:49:08.820941   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:08.823287   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.823642   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.823667   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.823889   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:08.824029   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.824188   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:08.824355   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:49:08.910528   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 10:49:08.910598   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 10:49:08.935860   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 10:49:08.935931   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 10:49:08.961307   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 10:49:08.961369   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 10:49:08.985321   24174 provision.go:87] duration metric: took 251.497465ms to configureAuth
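
The configureAuth step above issues a server certificate whose subject alternative names cover the node's hostname and IPs (the san=[...] list), signed by the local CA. A self-contained sketch of issuing such a SAN-bearing certificate with Go's crypto/x509; it creates a throwaway CA in-process rather than reading the .pem files from this run:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA key and self-signed CA certificate.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "sketchCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        // Server key and a certificate carrying DNS/IP SANs, as in the log line.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "ha-461283-m03"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-461283-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.127")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        // Emit the server certificate as PEM, the format of the .pem files above.
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
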
	I0722 10:49:08.985347   24174 buildroot.go:189] setting minikube options for container-runtime
	I0722 10:49:08.985549   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:49:08.985628   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:08.988095   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.988340   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.988364   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.988597   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:08.988779   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.988937   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.989073   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:08.989195   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:49:08.989341   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0722 10:49:08.989360   24174 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 10:49:09.280714   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 10:49:09.280741   24174 main.go:141] libmachine: Checking connection to Docker...
	I0722 10:49:09.280750   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetURL
	I0722 10:49:09.281926   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Using libvirt version 6000000
	I0722 10:49:09.284425   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.284839   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:09.284889   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.285040   24174 main.go:141] libmachine: Docker is up and running!
	I0722 10:49:09.285052   24174 main.go:141] libmachine: Reticulating splines...
	I0722 10:49:09.285058   24174 client.go:171] duration metric: took 24.610441153s to LocalClient.Create
	I0722 10:49:09.285077   24174 start.go:167] duration metric: took 24.61049373s to libmachine.API.Create "ha-461283"
	I0722 10:49:09.285089   24174 start.go:293] postStartSetup for "ha-461283-m03" (driver="kvm2")
	I0722 10:49:09.285105   24174 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 10:49:09.285124   24174 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:49:09.285358   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 10:49:09.285386   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:09.287781   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.288195   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:09.288223   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.288361   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:09.288539   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:09.288690   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:09.288832   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:49:09.374634   24174 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 10:49:09.378831   24174 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 10:49:09.378853   24174 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 10:49:09.378915   24174 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 10:49:09.378979   24174 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 10:49:09.378987   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /etc/ssl/certs/130982.pem
	I0722 10:49:09.379068   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 10:49:09.389186   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 10:49:09.413193   24174 start.go:296] duration metric: took 128.08844ms for postStartSetup
	I0722 10:49:09.413234   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetConfigRaw
	I0722 10:49:09.413768   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:49:09.416467   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.416824   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:09.416852   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.417089   24174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:49:09.417279   24174 start.go:128] duration metric: took 24.760434681s to createHost
	I0722 10:49:09.417311   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:09.419757   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.420078   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:09.420105   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.420264   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:09.420458   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:09.420609   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:09.420749   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:09.420883   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:49:09.421073   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0722 10:49:09.421084   24174 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 10:49:09.528822   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645349.505034760
	
	I0722 10:49:09.528841   24174 fix.go:216] guest clock: 1721645349.505034760
	I0722 10:49:09.528848   24174 fix.go:229] Guest: 2024-07-22 10:49:09.50503476 +0000 UTC Remote: 2024-07-22 10:49:09.41729795 +0000 UTC m=+151.263842966 (delta=87.73681ms)
	I0722 10:49:09.528862   24174 fix.go:200] guest clock delta is within tolerance: 87.73681ms
	I0722 10:49:09.528872   24174 start.go:83] releasing machines lock for "ha-461283-m03", held for 24.872130242s
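
The fix.go lines above compare the guest's `date +%s.%N` output against the local clock and accept the skew when it stays within tolerance. A small sketch of that check using the timestamps from this run; the tolerance value is an assumption, not minikube's actual threshold:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1721645349.505034760" // what `date +%s.%N` returned over SSH

        // Split seconds and nanoseconds; assumes a 9-digit fractional part.
        parts := strings.SplitN(guestOut, ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            panic(err)
        }
        nsec, err := strconv.ParseInt(parts[1], 10, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(sec, nsec)

        // The host-side timestamp recorded in the log for the same moment.
        remote := time.Date(2024, time.July, 22, 10, 49, 9, 417297950, time.UTC)

        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 1 * time.Second // assumed threshold
        if delta <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }
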
	I0722 10:49:09.528889   24174 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:49:09.529167   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:49:09.531836   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.532231   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:09.532260   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.534306   24174 out.go:177] * Found network options:
	I0722 10:49:09.535565   24174 out.go:177]   - NO_PROXY=192.168.39.43,192.168.39.207
	W0722 10:49:09.536739   24174 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 10:49:09.536762   24174 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 10:49:09.536783   24174 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:49:09.537363   24174 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:49:09.537535   24174 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:49:09.537627   24174 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 10:49:09.537664   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	W0722 10:49:09.537741   24174 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 10:49:09.537761   24174 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 10:49:09.537821   24174 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 10:49:09.537842   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:09.539945   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.540294   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:09.540321   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.540342   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.540454   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:09.540630   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:09.540807   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:09.540813   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:09.540831   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.540935   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:49:09.541008   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:09.541134   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:09.541287   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:09.541426   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:49:09.782542   24174 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 10:49:09.789559   24174 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 10:49:09.789624   24174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 10:49:09.805342   24174 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 10:49:09.805366   24174 start.go:495] detecting cgroup driver to use...
	I0722 10:49:09.805431   24174 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 10:49:09.822372   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 10:49:09.835744   24174 docker.go:217] disabling cri-docker service (if available) ...
	I0722 10:49:09.835792   24174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 10:49:09.848940   24174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 10:49:09.862003   24174 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 10:49:09.986348   24174 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 10:49:10.155950   24174 docker.go:233] disabling docker service ...
	I0722 10:49:10.156006   24174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 10:49:10.170158   24174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 10:49:10.182854   24174 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 10:49:10.296909   24174 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 10:49:10.406158   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 10:49:10.420189   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 10:49:10.438116   24174 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 10:49:10.438178   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:49:10.448415   24174 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 10:49:10.448476   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:49:10.458871   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:49:10.469518   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:49:10.479701   24174 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 10:49:10.490060   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:49:10.501689   24174 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:49:10.518496   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:49:10.530601   24174 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 10:49:10.541551   24174 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 10:49:10.541608   24174 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 10:49:10.556668   24174 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 10:49:10.567356   24174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:49:10.700055   24174 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 10:49:10.843840   24174 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 10:49:10.843920   24174 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 10:49:10.848752   24174 start.go:563] Will wait 60s for crictl version
	I0722 10:49:10.848801   24174 ssh_runner.go:195] Run: which crictl
	I0722 10:49:10.852600   24174 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 10:49:10.892773   24174 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
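
After restarting CRI-O, the provisioner waits up to 60s for /var/run/crio/crio.sock to appear before querying crictl, as the "Will wait 60s for socket path ..." line above records. A minimal sketch of that wait; the poll interval is an assumption:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the path exists and is a Unix socket, or times out.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if info, err := os.Stat(path); err == nil && info.Mode()&os.ModeSocket != 0 {
                return nil // the runtime socket exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("crio socket is ready")
    }
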
	I0722 10:49:10.892864   24174 ssh_runner.go:195] Run: crio --version
	I0722 10:49:10.921241   24174 ssh_runner.go:195] Run: crio --version
	I0722 10:49:10.950455   24174 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 10:49:10.951626   24174 out.go:177]   - env NO_PROXY=192.168.39.43
	I0722 10:49:10.952757   24174 out.go:177]   - env NO_PROXY=192.168.39.43,192.168.39.207
	I0722 10:49:10.954000   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:49:10.956328   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:10.956698   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:10.956722   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:10.956922   24174 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 10:49:10.961914   24174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
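
The one-liner above rewrites /etc/hosts so host.minikube.internal points at the network gateway, first dropping any stale entry. A sketch of the same idempotent update done in Go on a local copy of the file; the gateway IP and name are taken from the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // patchHosts drops any line whose last field is name, then appends "ip\tname".
    func patchHosts(contents, ip, name string) string {
        contents = strings.TrimRight(contents, "\n")
        var kept []string
        for _, line := range strings.Split(contents, "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[len(fields)-1] == name {
                continue // stale entry, drop it
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Print(patchHosts(string(data), "192.168.39.1", "host.minikube.internal"))
    }
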
	I0722 10:49:10.974396   24174 mustload.go:65] Loading cluster: ha-461283
	I0722 10:49:10.974575   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:49:10.974811   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:49:10.974850   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:49:10.991013   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44153
	I0722 10:49:10.991418   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:49:10.991902   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:49:10.991922   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:49:10.992224   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:49:10.992441   24174 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:49:10.993938   24174 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:49:10.994219   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:49:10.994250   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:49:11.009575   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39447
	I0722 10:49:11.009939   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:49:11.010337   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:49:11.010356   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:49:11.010651   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:49:11.010817   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:49:11.010962   24174 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283 for IP: 192.168.39.127
	I0722 10:49:11.010973   24174 certs.go:194] generating shared ca certs ...
	I0722 10:49:11.010991   24174 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:49:11.011122   24174 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 10:49:11.011167   24174 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 10:49:11.011176   24174 certs.go:256] generating profile certs ...
	I0722 10:49:11.011243   24174 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key
	I0722 10:49:11.011265   24174 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.f56168a6
	I0722 10:49:11.011278   24174 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.f56168a6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.43 192.168.39.207 192.168.39.127 192.168.39.254]
	I0722 10:49:11.449858   24174 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.f56168a6 ...
	I0722 10:49:11.449891   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.f56168a6: {Name:mk1acccb6e32b46331a2aec037f91e925bb70c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:49:11.450071   24174 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.f56168a6 ...
	I0722 10:49:11.450087   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.f56168a6: {Name:mkc815b51982cb420308edd988d909dd01ec0f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:49:11.450166   24174 certs.go:381] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.f56168a6 -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt
	I0722 10:49:11.450291   24174 certs.go:385] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.f56168a6 -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key
	I0722 10:49:11.450418   24174 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key
	I0722 10:49:11.450434   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 10:49:11.450447   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 10:49:11.450462   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 10:49:11.450477   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 10:49:11.450492   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 10:49:11.450506   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 10:49:11.450520   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 10:49:11.450534   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 10:49:11.450585   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 10:49:11.450615   24174 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 10:49:11.450625   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 10:49:11.450647   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 10:49:11.450671   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 10:49:11.450695   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 10:49:11.450735   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 10:49:11.450762   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem -> /usr/share/ca-certificates/13098.pem
	I0722 10:49:11.450778   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /usr/share/ca-certificates/130982.pem
	I0722 10:49:11.450792   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:49:11.450824   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:49:11.453996   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:49:11.454437   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:49:11.454465   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:49:11.454585   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:49:11.454768   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:49:11.454935   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:49:11.455098   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:49:11.528707   24174 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0722 10:49:11.534017   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0722 10:49:11.544709   24174 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0722 10:49:11.548890   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0722 10:49:11.559654   24174 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0722 10:49:11.563732   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0722 10:49:11.574079   24174 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0722 10:49:11.578279   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0722 10:49:11.590284   24174 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0722 10:49:11.594962   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0722 10:49:11.606237   24174 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0722 10:49:11.610641   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0722 10:49:11.624774   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 10:49:11.652394   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 10:49:11.678403   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 10:49:11.703983   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 10:49:11.729402   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0722 10:49:11.752843   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 10:49:11.776177   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 10:49:11.799762   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 10:49:11.823974   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 10:49:11.849282   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 10:49:11.871220   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 10:49:11.893411   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0722 10:49:11.911137   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0722 10:49:11.928736   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0722 10:49:11.945859   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0722 10:49:11.962202   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0722 10:49:11.978598   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0722 10:49:11.995906   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0722 10:49:12.012711   24174 ssh_runner.go:195] Run: openssl version
	I0722 10:49:12.018670   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 10:49:12.028738   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 10:49:12.032952   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 10:49:12.032997   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 10:49:12.038567   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 10:49:12.049963   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 10:49:12.061165   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 10:49:12.065930   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 10:49:12.065971   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 10:49:12.072079   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 10:49:12.082486   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 10:49:12.092554   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:49:12.096892   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:49:12.096935   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:49:12.102366   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
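The openssl/ln pairs above follow the standard OpenSSL hashed-directory convention: `openssl x509 -hash -noout -in <cert>` prints the subject-name hash (51391683, 3ec20f2e and b5213941 here), and the certificate is then linked as /etc/ssl/certs/<hash>.0 so TLS clients can find it by subject. A minimal Go sketch of the same idea, shelling out to openssl exactly as the commands in the log do; the collision counting that would produce .1, .2, ... suffixes is deliberately omitted:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash mirrors the "openssl x509 -hash" + "ln -fs" pair from the log:
// it asks openssl for the subject-name hash of certPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}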
	I0722 10:49:12.112504   24174 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 10:49:12.116725   24174 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 10:49:12.116776   24174 kubeadm.go:934] updating node {m03 192.168.39.127 8443 v1.30.3 crio true true} ...
	I0722 10:49:12.116845   24174 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-461283-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 10:49:12.116868   24174 kube-vip.go:115] generating kube-vip config ...
	I0722 10:49:12.116896   24174 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 10:49:12.132845   24174 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 10:49:12.132911   24174 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
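The manifest above is generated in memory and written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below (the 1441-byte scp); per cluster it carries the HA VIP (192.168.39.254), the guest interface (eth0) and the API server port (8443), while cp_enable and lb_enable switch on the control-plane VIP and the load-balancing the log reports auto-enabling. A minimal, illustrative Go sketch of templating such a static-pod manifest; the template text and struct fields here are assumptions for illustration, not minikube's actual kube-vip template:

package main

import (
	"os"
	"text/template"
)

// vipConfig carries the per-cluster values that vary in the manifest above.
type vipConfig struct {
	VIP       string
	Interface string
	Port      string
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: vip_interface
      value: "{{ .Interface }}"
    - name: port
      value: "{{ .Port }}"
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// Values taken from the log: the HA VIP, the guest NIC and the API server port.
	_ = t.Execute(os.Stdout, vipConfig{VIP: "192.168.39.254", Interface: "eth0", Port: "8443"})
}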
	I0722 10:49:12.132962   24174 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 10:49:12.142555   24174 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0722 10:49:12.142595   24174 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0722 10:49:12.152419   24174 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0722 10:49:12.152444   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 10:49:12.152451   24174 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0722 10:49:12.152475   24174 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0722 10:49:12.152491   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:49:12.152496   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 10:49:12.152512   24174 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 10:49:12.152558   24174 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 10:49:12.158250   24174 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0722 10:49:12.158277   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0722 10:49:12.194641   24174 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0722 10:49:12.194664   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 10:49:12.194682   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0722 10:49:12.194763   24174 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 10:49:12.238431   24174 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0722 10:49:12.238469   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
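The three "Not caching binary" lines above point at dl.k8s.io URLs whose "?checksum=file:...sha256" suffix tells the downloader to fetch the matching .sha256 file and verify the binary against it before it is installed on the node. A minimal sketch of that verify-after-download step in Go using only the standard library; the helper name and error handling are illustrative, not minikube's actual downloader:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads binURL to dst and checks its SHA-256 digest against
// the first field of the published .sha256 file at sumURL, mirroring the
// "?checksum=file:...sha256" URLs in the log.
func fetchVerified(binURL, sumURL, dst string) error {
	sumResp, err := http.Get(sumURL)
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumBytes))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file at %s", sumURL)
	}
	want := fields[0]

	binResp, err := http.Get(binURL)
	if err != nil {
		return err
	}
	defer binResp.Body.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), binResp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", dst, got, want)
	}
	return nil
}

func main() {
	err := fetchVerified(
		"https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl",
		"https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256",
		"/tmp/kubectl")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}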
	I0722 10:49:13.052480   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0722 10:49:13.061695   24174 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0722 10:49:13.078693   24174 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 10:49:13.095911   24174 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0722 10:49:13.114238   24174 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0722 10:49:13.118705   24174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 10:49:13.131082   24174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:49:13.268944   24174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 10:49:13.285635   24174 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:49:13.285981   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:49:13.286030   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:49:13.302166   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
	I0722 10:49:13.302525   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:49:13.302951   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:49:13.302971   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:49:13.303328   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:49:13.303498   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:49:13.303641   24174 start.go:317] joinCluster: &{Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:49:13.303797   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0722 10:49:13.303817   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:49:13.306668   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:49:13.307257   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:49:13.307279   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:49:13.307436   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:49:13.307577   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:49:13.307744   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:49:13.307913   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:49:13.460830   24174 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:49:13.460879   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v2m5lg.582egtnlncp86dov --discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-461283-m03 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443"
	I0722 10:49:37.780469   24174 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v2m5lg.582egtnlncp86dov --discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-461283-m03 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443": (24.319566133s)
	I0722 10:49:37.780510   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0722 10:49:38.407486   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-461283-m03 minikube.k8s.io/updated_at=2024_07_22T10_49_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=ha-461283 minikube.k8s.io/primary=false
	I0722 10:49:38.528981   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-461283-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0722 10:49:38.644971   24174 start.go:319] duration metric: took 25.341327641s to joinCluster
	I0722 10:49:38.645043   24174 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:49:38.645355   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:49:38.646239   24174 out.go:177] * Verifying Kubernetes components...
	I0722 10:49:38.647507   24174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:49:38.912498   24174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 10:49:38.974546   24174 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:49:38.974768   24174 kapi.go:59] client config for ha-461283: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.crt", KeyFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key", CAFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0722 10:49:38.974823   24174 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.43:8443
	I0722 10:49:38.975036   24174 node_ready.go:35] waiting up to 6m0s for node "ha-461283-m03" to be "Ready" ...
	I0722 10:49:38.975119   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:38.975128   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:38.975135   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:38.975138   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:38.978489   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:39.475235   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:39.475259   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:39.475272   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:39.475278   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:39.479578   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:49:39.976259   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:39.976282   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:39.976294   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:39.976302   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:39.979733   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:40.475184   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:40.475203   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:40.475211   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:40.475216   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:40.479258   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:49:40.975741   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:40.975763   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:40.975773   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:40.975779   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:40.979651   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:40.980436   24174 node_ready.go:53] node "ha-461283-m03" has status "Ready":"False"
	I0722 10:49:41.475913   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:41.475937   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:41.475947   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:41.475954   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:41.480780   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:49:41.976164   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:41.976188   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:41.976198   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:41.976203   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:41.979341   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:42.475264   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:42.475300   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:42.475309   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:42.475312   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:42.478872   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:42.975873   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:42.975896   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:42.975904   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:42.975907   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:42.979944   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:49:43.475598   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:43.475621   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:43.475627   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:43.475632   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:43.479075   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:43.479748   24174 node_ready.go:53] node "ha-461283-m03" has status "Ready":"False"
	I0722 10:49:43.975810   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:43.975831   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:43.975842   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:43.975850   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:43.979384   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:44.476088   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:44.476112   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:44.476123   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:44.476129   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:44.480188   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:49:44.975913   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:44.975933   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:44.975941   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:44.975945   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:44.979258   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:45.476118   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:45.476146   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:45.476155   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:45.476168   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:45.480099   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:45.480773   24174 node_ready.go:53] node "ha-461283-m03" has status "Ready":"False"
	I0722 10:49:45.975573   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:45.975594   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:45.975603   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:45.975607   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:45.979283   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:46.475626   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:46.475657   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:46.475669   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:46.475673   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:46.480160   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:49:46.975996   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:46.976018   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:46.976026   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:46.976031   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:46.981084   24174 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 10:49:47.475268   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:47.475294   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:47.475306   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:47.475311   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:47.478707   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:47.975836   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:47.975856   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:47.975866   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:47.975871   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:47.979275   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:47.980112   24174 node_ready.go:53] node "ha-461283-m03" has status "Ready":"False"
	I0722 10:49:48.475457   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:48.475477   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:48.475485   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:48.475493   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:48.479131   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:48.976305   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:48.976327   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:48.976337   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:48.976343   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:48.980020   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:49.475301   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:49.475325   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:49.475336   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:49.475343   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:49.479220   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:49.975275   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:49.975296   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:49.975304   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:49.975308   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:49.978767   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:50.475603   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:50.475628   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:50.475638   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:50.475642   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:50.478903   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:50.479595   24174 node_ready.go:53] node "ha-461283-m03" has status "Ready":"False"
	I0722 10:49:50.976185   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:50.976208   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:50.976218   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:50.976225   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:50.979573   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:51.475973   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:51.476000   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:51.476007   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:51.476013   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:51.479697   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:51.975307   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:51.975328   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:51.975336   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:51.975341   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:51.978674   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:51.979567   24174 node_ready.go:49] node "ha-461283-m03" has status "Ready":"True"
	I0722 10:49:51.979606   24174 node_ready.go:38] duration metric: took 13.004548385s for node "ha-461283-m03" to be "Ready" ...
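The wait that just completed is the loop behind all of the repeated GET .../nodes/ha-461283-m03 round_trippers lines above: node_ready.go re-fetches the node roughly every 500ms until its Ready condition turns True. A rough equivalent written against the typed client-go API instead of raw requests; the kubeconfig path is the one from the log, the interval and timeout mirror the logged values, and this is a sketch of the pattern rather than minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node every 500ms until its Ready condition is
// True or the timeout expires, which is the same loop the log shows as
// repeated GET .../nodes/ha-461283-m03 calls.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19313-5960/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-461283-m03", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}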
	I0722 10:49:51.979617   24174 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 10:49:51.979693   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:49:51.979704   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:51.979714   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:51.979719   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:51.988241   24174 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 10:49:51.995547   24174 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qrfdd" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:51.995631   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-qrfdd
	I0722 10:49:51.995639   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:51.995647   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:51.995653   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:51.998724   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:51.999389   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:51.999405   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:51.999412   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:51.999417   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.001964   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:49:52.002707   24174 pod_ready.go:92] pod "coredns-7db6d8ff4d-qrfdd" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:52.002733   24174 pod_ready.go:81] duration metric: took 7.158178ms for pod "coredns-7db6d8ff4d-qrfdd" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.002745   24174 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zb547" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.002815   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zb547
	I0722 10:49:52.002826   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.002834   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.002851   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.006824   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:52.008042   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:52.008060   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.008070   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.008078   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.011406   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:52.011980   24174 pod_ready.go:92] pod "coredns-7db6d8ff4d-zb547" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:52.011998   24174 pod_ready.go:81] duration metric: took 9.244763ms for pod "coredns-7db6d8ff4d-zb547" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.012009   24174 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.012063   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-ha-461283
	I0722 10:49:52.012072   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.012082   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.012087   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.015146   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:52.015766   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:52.015784   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.015794   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.015801   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.018603   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:49:52.019054   24174 pod_ready.go:92] pod "etcd-ha-461283" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:52.019070   24174 pod_ready.go:81] duration metric: took 7.053565ms for pod "etcd-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.019078   24174 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.019122   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-ha-461283-m02
	I0722 10:49:52.019130   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.019142   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.019146   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.022351   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:52.022888   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:52.022901   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.022908   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.022912   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.025786   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:49:52.026300   24174 pod_ready.go:92] pod "etcd-ha-461283-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:52.026320   24174 pod_ready.go:81] duration metric: took 7.235909ms for pod "etcd-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.026332   24174 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-461283-m03" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.175726   24174 request.go:629] Waited for 149.300225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-ha-461283-m03
	I0722 10:49:52.175783   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-ha-461283-m03
	I0722 10:49:52.175789   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.175796   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.175803   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.179606   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:52.375378   24174 request.go:629] Waited for 195.273197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:52.375445   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:52.375451   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.375458   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.375464   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.378558   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:52.379370   24174 pod_ready.go:92] pod "etcd-ha-461283-m03" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:52.379384   24174 pod_ready.go:81] duration metric: took 353.046152ms for pod "etcd-ha-461283-m03" in "kube-system" namespace to be "Ready" ...
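The "request.go:629] Waited for ... due to client-side throttling" lines that start appearing here are client-go's default rate limiter at work: with QPS and Burst left at 0 in the rest.Config dumped earlier, the client falls back to the defaults (5 requests/s, burst 10), so the burst of node and pod GETs gets spaced out. If a caller wanted to avoid that throttling, the limits can be raised on the config before building the clientset; a small sketch, with the numbers chosen only as an example:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the test uses (path as in the log).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19313-5960/kubeconfig")
	if err != nil {
		panic(err)
	}
	// QPS/Burst of 0 mean "use client-go defaults" (5 and 10); raising them
	// avoids the "Waited for ... due to client-side throttling" messages.
	cfg.QPS = 50
	cfg.Burst = 100

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}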
	I0722 10:49:52.379400   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.575549   24174 request.go:629] Waited for 196.096059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283
	I0722 10:49:52.575635   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283
	I0722 10:49:52.575650   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.575657   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.575661   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.578951   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:52.776165   24174 request.go:629] Waited for 196.343974ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:52.776257   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:52.776269   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.776280   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.776287   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.779509   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:52.780233   24174 pod_ready.go:92] pod "kube-apiserver-ha-461283" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:52.780254   24174 pod_ready.go:81] duration metric: took 400.846867ms for pod "kube-apiserver-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.780267   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.975277   24174 request.go:629] Waited for 194.944118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m02
	I0722 10:49:52.975355   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m02
	I0722 10:49:52.975363   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.975371   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.975377   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.979405   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:49:53.175500   24174 request.go:629] Waited for 195.358341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:53.175581   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:53.175595   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:53.175606   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:53.175613   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:53.179810   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:49:53.180530   24174 pod_ready.go:92] pod "kube-apiserver-ha-461283-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:53.180548   24174 pod_ready.go:81] duration metric: took 400.269537ms for pod "kube-apiserver-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:53.180557   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-461283-m03" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:53.376195   24174 request.go:629] Waited for 195.540352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m03
	I0722 10:49:53.376255   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m03
	I0722 10:49:53.376260   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:53.376268   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:53.376274   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:53.379484   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:53.575517   24174 request.go:629] Waited for 195.277322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:53.575578   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:53.575583   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:53.575589   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:53.575594   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:53.579103   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:53.775997   24174 request.go:629] Waited for 95.253357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m03
	I0722 10:49:53.776050   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m03
	I0722 10:49:53.776055   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:53.776063   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:53.776067   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:53.779071   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:49:53.976250   24174 request.go:629] Waited for 196.379747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:53.976315   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:53.976322   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:53.976333   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:53.976341   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:53.979786   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:54.181473   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m03
	I0722 10:49:54.181497   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:54.181507   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:54.181512   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:54.184611   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:54.375617   24174 request.go:629] Waited for 190.345543ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:54.375704   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:54.375712   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:54.375720   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:54.375724   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:54.379330   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:54.380161   24174 pod_ready.go:92] pod "kube-apiserver-ha-461283-m03" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:54.380180   24174 pod_ready.go:81] duration metric: took 1.199616581s for pod "kube-apiserver-ha-461283-m03" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:54.380191   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:54.575609   24174 request.go:629] Waited for 195.343993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283
	I0722 10:49:54.575679   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283
	I0722 10:49:54.575685   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:54.575692   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:54.575697   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:54.579662   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:54.775880   24174 request.go:629] Waited for 195.319268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:54.775940   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:54.775947   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:54.775958   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:54.775965   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:54.779642   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:54.780628   24174 pod_ready.go:92] pod "kube-controller-manager-ha-461283" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:54.780647   24174 pod_ready.go:81] duration metric: took 400.449567ms for pod "kube-controller-manager-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:54.780656   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:54.975688   24174 request.go:629] Waited for 194.945686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283-m02
	I0722 10:49:54.975738   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283-m02
	I0722 10:49:54.975743   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:54.975749   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:54.975753   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:54.979037   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:55.175286   24174 request.go:629] Waited for 195.301108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:55.175342   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:55.175348   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:55.175356   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:55.175365   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:55.179116   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:55.179656   24174 pod_ready.go:92] pod "kube-controller-manager-ha-461283-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:55.179673   24174 pod_ready.go:81] duration metric: took 399.011357ms for pod "kube-controller-manager-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:55.179687   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-461283-m03" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:55.375695   24174 request.go:629] Waited for 195.933455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283-m03
	I0722 10:49:55.375783   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283-m03
	I0722 10:49:55.375795   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:55.375807   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:55.375816   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:55.379578   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:55.575703   24174 request.go:629] Waited for 195.274723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:55.575758   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:55.575763   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:55.575770   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:55.575775   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:55.579123   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:55.579750   24174 pod_ready.go:92] pod "kube-controller-manager-ha-461283-m03" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:55.579769   24174 pod_ready.go:81] duration metric: took 400.074203ms for pod "kube-controller-manager-ha-461283-m03" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:55.579778   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-28zxf" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:55.775854   24174 request.go:629] Waited for 196.003639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-28zxf
	I0722 10:49:55.775926   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-28zxf
	I0722 10:49:55.775937   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:55.775949   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:55.775961   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:55.779658   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:55.975779   24174 request.go:629] Waited for 195.258311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:55.975842   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:55.975847   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:55.975855   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:55.975861   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:55.979165   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:55.979751   24174 pod_ready.go:92] pod "kube-proxy-28zxf" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:55.979771   24174 pod_ready.go:81] duration metric: took 399.987026ms for pod "kube-proxy-28zxf" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:55.979780   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xkbsx" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:56.175411   24174 request.go:629] Waited for 195.565573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkbsx
	I0722 10:49:56.175491   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkbsx
	I0722 10:49:56.175500   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:56.175507   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:56.175511   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:56.179143   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:56.375754   24174 request.go:629] Waited for 195.399438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:56.375817   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:56.375825   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:56.375835   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:56.375842   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:56.379571   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:56.380445   24174 pod_ready.go:92] pod "kube-proxy-xkbsx" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:56.380466   24174 pod_ready.go:81] duration metric: took 400.679442ms for pod "kube-proxy-xkbsx" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:56.380479   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zdbjw" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:56.575388   24174 request.go:629] Waited for 194.828894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zdbjw
	I0722 10:49:56.575440   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zdbjw
	I0722 10:49:56.575447   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:56.575455   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:56.575462   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:56.579016   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:56.776132   24174 request.go:629] Waited for 196.361583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:56.776214   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:56.776225   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:56.776236   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:56.776244   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:56.779256   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:49:56.779921   24174 pod_ready.go:92] pod "kube-proxy-zdbjw" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:56.779941   24174 pod_ready.go:81] duration metric: took 399.455729ms for pod "kube-proxy-zdbjw" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:56.779958   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:56.975993   24174 request.go:629] Waited for 195.977344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283
	I0722 10:49:56.976047   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283
	I0722 10:49:56.976052   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:56.976061   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:56.976069   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:56.979391   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:57.175410   24174 request.go:629] Waited for 195.285956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:57.175470   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:57.175475   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:57.175483   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:57.175487   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:57.178950   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:57.179729   24174 pod_ready.go:92] pod "kube-scheduler-ha-461283" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:57.179746   24174 pod_ready.go:81] duration metric: took 399.780455ms for pod "kube-scheduler-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:57.179756   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:57.375848   24174 request.go:629] Waited for 196.035002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283-m02
	I0722 10:49:57.375947   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283-m02
	I0722 10:49:57.375965   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:57.375991   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:57.376000   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:57.379397   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:57.575400   24174 request.go:629] Waited for 195.271015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:57.575465   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:57.575470   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:57.575477   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:57.575482   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:57.579006   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:57.579926   24174 pod_ready.go:92] pod "kube-scheduler-ha-461283-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:57.579944   24174 pod_ready.go:81] duration metric: took 400.18132ms for pod "kube-scheduler-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:57.579956   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-461283-m03" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:57.776045   24174 request.go:629] Waited for 196.01819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283-m03
	I0722 10:49:57.776114   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283-m03
	I0722 10:49:57.776122   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:57.776132   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:57.776141   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:57.779891   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:57.976077   24174 request.go:629] Waited for 195.361716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:57.976142   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:57.976151   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:57.976162   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:57.976172   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:57.979683   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:57.980456   24174 pod_ready.go:92] pod "kube-scheduler-ha-461283-m03" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:57.980475   24174 pod_ready.go:81] duration metric: took 400.51165ms for pod "kube-scheduler-ha-461283-m03" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:57.980486   24174 pod_ready.go:38] duration metric: took 6.00085144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 10:49:57.980499   24174 api_server.go:52] waiting for apiserver process to appear ...
	I0722 10:49:57.980547   24174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:49:57.998327   24174 api_server.go:72] duration metric: took 19.353247057s to wait for apiserver process to appear ...
	I0722 10:49:57.998350   24174 api_server.go:88] waiting for apiserver healthz status ...
	I0722 10:49:57.998367   24174 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0722 10:49:58.005000   24174 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I0722 10:49:58.005073   24174 round_trippers.go:463] GET https://192.168.39.43:8443/version
	I0722 10:49:58.005085   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:58.005094   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:58.005100   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:58.005968   24174 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0722 10:49:58.006029   24174 api_server.go:141] control plane version: v1.30.3
	I0722 10:49:58.006044   24174 api_server.go:131] duration metric: took 7.687976ms to wait for apiserver health ...
	I0722 10:49:58.006053   24174 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 10:49:58.175855   24174 request.go:629] Waited for 169.718373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:49:58.175899   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:49:58.175904   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:58.175916   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:58.175922   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:58.182153   24174 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 10:49:58.191155   24174 system_pods.go:59] 24 kube-system pods found
	I0722 10:49:58.191185   24174 system_pods.go:61] "coredns-7db6d8ff4d-qrfdd" [f1c9698a-e97d-4b8a-ab71-f19003b5dcfd] Running
	I0722 10:49:58.191191   24174 system_pods.go:61] "coredns-7db6d8ff4d-zb547" [54886641-9710-4355-86ff-016ad48b5cd5] Running
	I0722 10:49:58.191197   24174 system_pods.go:61] "etcd-ha-461283" [842e06f5-5c51-4cd9-b6ab-b3a8cbc9e23b] Running
	I0722 10:49:58.191201   24174 system_pods.go:61] "etcd-ha-461283-m02" [832101e1-09b9-4b1c-a39b-77c46725a280] Running
	I0722 10:49:58.191205   24174 system_pods.go:61] "etcd-ha-461283-m03" [4e5fe31e-0b87-4ab1-8344-d6c7f7f4beb8] Running
	I0722 10:49:58.191209   24174 system_pods.go:61] "kindnet-9m2ms" [9b540ee3-5d01-422c-85e7-b5a5b7e2bcba] Running
	I0722 10:49:58.191214   24174 system_pods.go:61] "kindnet-hmrqh" [abe55aff-7926-481f-90cd-3cc209d79f63] Running
	I0722 10:49:58.191218   24174 system_pods.go:61] "kindnet-qsphb" [6b302f3f-51ae-4492-8ac3-470e7739ad08] Running
	I0722 10:49:58.191223   24174 system_pods.go:61] "kube-apiserver-ha-461283" [ca55ae7f-0148-4802-b9cb-424453f13992] Running
	I0722 10:49:58.191228   24174 system_pods.go:61] "kube-apiserver-ha-461283-m02" [d19287ef-f418-4ec5-bb43-e42dd94562ea] Running
	I0722 10:49:58.191236   24174 system_pods.go:61] "kube-apiserver-ha-461283-m03" [e0fd45ad-15f4-486f-a67d-c9e281f5b088] Running
	I0722 10:49:58.191242   24174 system_pods.go:61] "kube-controller-manager-ha-461283" [3adf0e38-7eb7-4945-9059-5371718a8d92] Running
	I0722 10:49:58.191250   24174 system_pods.go:61] "kube-controller-manager-ha-461283-m02" [d1cebc09-9543-4d78-a1b9-785e4c489814] Running
	I0722 10:49:58.191255   24174 system_pods.go:61] "kube-controller-manager-ha-461283-m03" [e5388816-2cb2-42eb-a732-fda7f45f77ea] Running
	I0722 10:49:58.191263   24174 system_pods.go:61] "kube-proxy-28zxf" [5894062f-0d05-45f4-88eb-da134f234e2d] Running
	I0722 10:49:58.191268   24174 system_pods.go:61] "kube-proxy-xkbsx" [9d137555-9952-418f-bbfb-2159a48bbfcc] Running
	I0722 10:49:58.191276   24174 system_pods.go:61] "kube-proxy-zdbjw" [f60a30fe-aa02-4f0c-ab22-c8c26a02d5e3] Running
	I0722 10:49:58.191282   24174 system_pods.go:61] "kube-scheduler-ha-461283" [3c18099b-16d8-4214-92c8-b583323bed9b] Running
	I0722 10:49:58.191289   24174 system_pods.go:61] "kube-scheduler-ha-461283-m02" [bdffe858-ca6b-4f8c-951a-e08115dff406] Running
	I0722 10:49:58.191324   24174 system_pods.go:61] "kube-scheduler-ha-461283-m03" [1ef00867-aff1-4ace-8608-446fe7a89777] Running
	I0722 10:49:58.191336   24174 system_pods.go:61] "kube-vip-ha-461283" [244dde01-94fe-46c1-82f2-92ca2624750e] Running
	I0722 10:49:58.191342   24174 system_pods.go:61] "kube-vip-ha-461283-m02" [a74a9071-1b29-4c1a-abc4-b57a7499e3d8] Running
	I0722 10:49:58.191347   24174 system_pods.go:61] "kube-vip-ha-461283-m03" [1a8e6ea4-4cbb-4adb-bb70-63be44cbd682] Running
	I0722 10:49:58.191354   24174 system_pods.go:61] "storage-provisioner" [a336a57b-330a-4251-8e33-2b277593a565] Running
	I0722 10:49:58.191362   24174 system_pods.go:74] duration metric: took 185.300855ms to wait for pod list to return data ...
	I0722 10:49:58.191374   24174 default_sa.go:34] waiting for default service account to be created ...
	I0722 10:49:58.375870   24174 request.go:629] Waited for 184.421682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/default/serviceaccounts
	I0722 10:49:58.375924   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/default/serviceaccounts
	I0722 10:49:58.375929   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:58.375937   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:58.375942   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:58.379010   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:58.379135   24174 default_sa.go:45] found service account: "default"
	I0722 10:49:58.379150   24174 default_sa.go:55] duration metric: took 187.76681ms for default service account to be created ...
	I0722 10:49:58.379158   24174 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 10:49:58.575488   24174 request.go:629] Waited for 196.270322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:49:58.575554   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:49:58.575561   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:58.575571   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:58.575575   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:58.581970   24174 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 10:49:58.588869   24174 system_pods.go:86] 24 kube-system pods found
	I0722 10:49:58.588894   24174 system_pods.go:89] "coredns-7db6d8ff4d-qrfdd" [f1c9698a-e97d-4b8a-ab71-f19003b5dcfd] Running
	I0722 10:49:58.588900   24174 system_pods.go:89] "coredns-7db6d8ff4d-zb547" [54886641-9710-4355-86ff-016ad48b5cd5] Running
	I0722 10:49:58.588904   24174 system_pods.go:89] "etcd-ha-461283" [842e06f5-5c51-4cd9-b6ab-b3a8cbc9e23b] Running
	I0722 10:49:58.588908   24174 system_pods.go:89] "etcd-ha-461283-m02" [832101e1-09b9-4b1c-a39b-77c46725a280] Running
	I0722 10:49:58.588912   24174 system_pods.go:89] "etcd-ha-461283-m03" [4e5fe31e-0b87-4ab1-8344-d6c7f7f4beb8] Running
	I0722 10:49:58.588916   24174 system_pods.go:89] "kindnet-9m2ms" [9b540ee3-5d01-422c-85e7-b5a5b7e2bcba] Running
	I0722 10:49:58.588920   24174 system_pods.go:89] "kindnet-hmrqh" [abe55aff-7926-481f-90cd-3cc209d79f63] Running
	I0722 10:49:58.588925   24174 system_pods.go:89] "kindnet-qsphb" [6b302f3f-51ae-4492-8ac3-470e7739ad08] Running
	I0722 10:49:58.588932   24174 system_pods.go:89] "kube-apiserver-ha-461283" [ca55ae7f-0148-4802-b9cb-424453f13992] Running
	I0722 10:49:58.588938   24174 system_pods.go:89] "kube-apiserver-ha-461283-m02" [d19287ef-f418-4ec5-bb43-e42dd94562ea] Running
	I0722 10:49:58.588945   24174 system_pods.go:89] "kube-apiserver-ha-461283-m03" [e0fd45ad-15f4-486f-a67d-c9e281f5b088] Running
	I0722 10:49:58.588952   24174 system_pods.go:89] "kube-controller-manager-ha-461283" [3adf0e38-7eb7-4945-9059-5371718a8d92] Running
	I0722 10:49:58.588962   24174 system_pods.go:89] "kube-controller-manager-ha-461283-m02" [d1cebc09-9543-4d78-a1b9-785e4c489814] Running
	I0722 10:49:58.588967   24174 system_pods.go:89] "kube-controller-manager-ha-461283-m03" [e5388816-2cb2-42eb-a732-fda7f45f77ea] Running
	I0722 10:49:58.588971   24174 system_pods.go:89] "kube-proxy-28zxf" [5894062f-0d05-45f4-88eb-da134f234e2d] Running
	I0722 10:49:58.588975   24174 system_pods.go:89] "kube-proxy-xkbsx" [9d137555-9952-418f-bbfb-2159a48bbfcc] Running
	I0722 10:49:58.588980   24174 system_pods.go:89] "kube-proxy-zdbjw" [f60a30fe-aa02-4f0c-ab22-c8c26a02d5e3] Running
	I0722 10:49:58.588984   24174 system_pods.go:89] "kube-scheduler-ha-461283" [3c18099b-16d8-4214-92c8-b583323bed9b] Running
	I0722 10:49:58.588988   24174 system_pods.go:89] "kube-scheduler-ha-461283-m02" [bdffe858-ca6b-4f8c-951a-e08115dff406] Running
	I0722 10:49:58.588993   24174 system_pods.go:89] "kube-scheduler-ha-461283-m03" [1ef00867-aff1-4ace-8608-446fe7a89777] Running
	I0722 10:49:58.588997   24174 system_pods.go:89] "kube-vip-ha-461283" [244dde01-94fe-46c1-82f2-92ca2624750e] Running
	I0722 10:49:58.589002   24174 system_pods.go:89] "kube-vip-ha-461283-m02" [a74a9071-1b29-4c1a-abc4-b57a7499e3d8] Running
	I0722 10:49:58.589005   24174 system_pods.go:89] "kube-vip-ha-461283-m03" [1a8e6ea4-4cbb-4adb-bb70-63be44cbd682] Running
	I0722 10:49:58.589008   24174 system_pods.go:89] "storage-provisioner" [a336a57b-330a-4251-8e33-2b277593a565] Running
	I0722 10:49:58.589015   24174 system_pods.go:126] duration metric: took 209.849845ms to wait for k8s-apps to be running ...
	I0722 10:49:58.589021   24174 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 10:49:58.589071   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:49:58.605159   24174 system_svc.go:56] duration metric: took 16.128323ms WaitForService to wait for kubelet
	I0722 10:49:58.605185   24174 kubeadm.go:582] duration metric: took 19.960108237s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 10:49:58.605208   24174 node_conditions.go:102] verifying NodePressure condition ...
	I0722 10:49:58.775691   24174 request.go:629] Waited for 170.39407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes
	I0722 10:49:58.775750   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes
	I0722 10:49:58.775758   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:58.775768   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:58.775777   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:58.779067   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:58.780404   24174 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 10:49:58.780428   24174 node_conditions.go:123] node cpu capacity is 2
	I0722 10:49:58.780443   24174 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 10:49:58.780448   24174 node_conditions.go:123] node cpu capacity is 2
	I0722 10:49:58.780454   24174 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 10:49:58.780458   24174 node_conditions.go:123] node cpu capacity is 2
	I0722 10:49:58.780464   24174 node_conditions.go:105] duration metric: took 175.248519ms to run NodePressure ...
	I0722 10:49:58.780480   24174 start.go:241] waiting for startup goroutines ...
	I0722 10:49:58.780508   24174 start.go:255] writing updated cluster config ...
	I0722 10:49:58.780987   24174 ssh_runner.go:195] Run: rm -f paused
	I0722 10:49:58.833901   24174 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 10:49:58.835660   24174 out.go:177] * Done! kubectl is now configured to use "ha-461283" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.615100802Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbef4481-be1c-45e7-83f1-571a41bd7a8e name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.616583514Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e0d7d39c32b26c8cf125a279920668ca2e951905242a4868edd6a21fca1416c,PodSandboxId:816fd2e7cd706d322555f2f36fbd206ae986d8ed89be70bd1de6c5b649078cfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721645402570912123,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f9af1e9784e28d6f1a3d8907ed95d52086de262ed11e8309757b8a7f3db29b,PodSandboxId:df4c3d24ea139dbcc5ab94af0cf2be59201940f504340e6dc500c086e01fbfad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721645264413847237,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719,PodSandboxId:4723f41d773ba6946b0397961e08b81adf4a47a279dd5f445f56c2a16eef40bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645264374064373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a,PodSandboxId:0c2ec5e338fb367db8b1a2ad528c7af8ed202e4bbaeab5159468a25321378cae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645264350379429,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e9
7d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb,PodSandboxId:e171bdcb5b84c953c7434405b97b99a193f09e165b7345f7c8825f108427dce6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721645252505350541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44,PodSandboxId:ffbce6c0af4bc0412cbe807c70c74257bbd3514d683fc4a1672124862f6298c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172164525
0607547457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6533d7c334e7fda51727c6185c7fa171d3b1c652ce4d368c5d71df0f7feef49,PodSandboxId:97b8ec6ae1c31219c39f0e98c49a73f9bb5ffd0968b64a7215c5c3efc5ef5588,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17216452323
82303288,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ded6380659b2f4b7af2dd651372121bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240,PodSandboxId:54a1041d8e184916319d562d688313e1c1dd4452462b22bf88d27c23483b8d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721645230463101574,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d20cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c8bf4f5df71e6a77d448b9212062f96e90349adbbad8f2e329463bd3e1884d,PodSandboxId:ca3273ac397ead0e26c8356d955855dfe5575fc6c9a09e985060b34c33557ff5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721645230355376992,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08,PodSandboxId:e5abe1a4431950fd7a30c4d790682bbf3541b7faca55396453e5041f929979a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721645230331559113,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce5e449cc185968f3ceb60a6b397366e0ce5da8bed2aaf99f71b156613df39e,PodSandboxId:5d28c62eff243ce10766503792f28f0bd03da2ca60c8245c2143c481a83362f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721645230334934890,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Annotations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dbef4481-be1c-45e7-83f1-571a41bd7a8e name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.651848641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=962e9780-e2d9-45a1-8e37-e901baf5358d name=/runtime.v1.RuntimeService/Version
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.651936333Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=962e9780-e2d9-45a1-8e37-e901baf5358d name=/runtime.v1.RuntimeService/Version
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.653287884Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b8a57ec-e790-4b68-91ad-2265ae4dad99 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.653753176Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721645614653728889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b8a57ec-e790-4b68-91ad-2265ae4dad99 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.654552656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8867fa7d-1971-4a25-8836-69b64e4ab976 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.654619951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8867fa7d-1971-4a25-8836-69b64e4ab976 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.654888805Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e0d7d39c32b26c8cf125a279920668ca2e951905242a4868edd6a21fca1416c,PodSandboxId:816fd2e7cd706d322555f2f36fbd206ae986d8ed89be70bd1de6c5b649078cfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721645402570912123,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f9af1e9784e28d6f1a3d8907ed95d52086de262ed11e8309757b8a7f3db29b,PodSandboxId:df4c3d24ea139dbcc5ab94af0cf2be59201940f504340e6dc500c086e01fbfad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721645264413847237,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719,PodSandboxId:4723f41d773ba6946b0397961e08b81adf4a47a279dd5f445f56c2a16eef40bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645264374064373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a,PodSandboxId:0c2ec5e338fb367db8b1a2ad528c7af8ed202e4bbaeab5159468a25321378cae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645264350379429,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e9
7d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb,PodSandboxId:e171bdcb5b84c953c7434405b97b99a193f09e165b7345f7c8825f108427dce6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721645252505350541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44,PodSandboxId:ffbce6c0af4bc0412cbe807c70c74257bbd3514d683fc4a1672124862f6298c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172164525
0607547457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6533d7c334e7fda51727c6185c7fa171d3b1c652ce4d368c5d71df0f7feef49,PodSandboxId:97b8ec6ae1c31219c39f0e98c49a73f9bb5ffd0968b64a7215c5c3efc5ef5588,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17216452323
82303288,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ded6380659b2f4b7af2dd651372121bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240,PodSandboxId:54a1041d8e184916319d562d688313e1c1dd4452462b22bf88d27c23483b8d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721645230463101574,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d20cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c8bf4f5df71e6a77d448b9212062f96e90349adbbad8f2e329463bd3e1884d,PodSandboxId:ca3273ac397ead0e26c8356d955855dfe5575fc6c9a09e985060b34c33557ff5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721645230355376992,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08,PodSandboxId:e5abe1a4431950fd7a30c4d790682bbf3541b7faca55396453e5041f929979a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721645230331559113,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce5e449cc185968f3ceb60a6b397366e0ce5da8bed2aaf99f71b156613df39e,PodSandboxId:5d28c62eff243ce10766503792f28f0bd03da2ca60c8245c2143c481a83362f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721645230334934890,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Annotations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8867fa7d-1971-4a25-8836-69b64e4ab976 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.683538921Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=ca881bd0-bb06-4a69-aae9-2776402c7ca9 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.683904660Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca881bd0-bb06-4a69-aae9-2776402c7ca9 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.695360381Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97619ca9-cb7f-491b-8dad-ab6f71343f95 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.695436587Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97619ca9-cb7f-491b-8dad-ab6f71343f95 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.696819940Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be6351a9-57f8-43ee-8d45-10c01ac97dae name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.697445022Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721645614697421696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be6351a9-57f8-43ee-8d45-10c01ac97dae name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.697997322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=938769d6-65b8-4da9-96b8-e08104377798 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.698047669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=938769d6-65b8-4da9-96b8-e08104377798 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.698265512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e0d7d39c32b26c8cf125a279920668ca2e951905242a4868edd6a21fca1416c,PodSandboxId:816fd2e7cd706d322555f2f36fbd206ae986d8ed89be70bd1de6c5b649078cfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721645402570912123,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f9af1e9784e28d6f1a3d8907ed95d52086de262ed11e8309757b8a7f3db29b,PodSandboxId:df4c3d24ea139dbcc5ab94af0cf2be59201940f504340e6dc500c086e01fbfad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721645264413847237,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719,PodSandboxId:4723f41d773ba6946b0397961e08b81adf4a47a279dd5f445f56c2a16eef40bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645264374064373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a,PodSandboxId:0c2ec5e338fb367db8b1a2ad528c7af8ed202e4bbaeab5159468a25321378cae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645264350379429,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e9
7d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb,PodSandboxId:e171bdcb5b84c953c7434405b97b99a193f09e165b7345f7c8825f108427dce6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721645252505350541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44,PodSandboxId:ffbce6c0af4bc0412cbe807c70c74257bbd3514d683fc4a1672124862f6298c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172164525
0607547457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6533d7c334e7fda51727c6185c7fa171d3b1c652ce4d368c5d71df0f7feef49,PodSandboxId:97b8ec6ae1c31219c39f0e98c49a73f9bb5ffd0968b64a7215c5c3efc5ef5588,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17216452323
82303288,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ded6380659b2f4b7af2dd651372121bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240,PodSandboxId:54a1041d8e184916319d562d688313e1c1dd4452462b22bf88d27c23483b8d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721645230463101574,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d20cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c8bf4f5df71e6a77d448b9212062f96e90349adbbad8f2e329463bd3e1884d,PodSandboxId:ca3273ac397ead0e26c8356d955855dfe5575fc6c9a09e985060b34c33557ff5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721645230355376992,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08,PodSandboxId:e5abe1a4431950fd7a30c4d790682bbf3541b7faca55396453e5041f929979a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721645230331559113,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce5e449cc185968f3ceb60a6b397366e0ce5da8bed2aaf99f71b156613df39e,PodSandboxId:5d28c62eff243ce10766503792f28f0bd03da2ca60c8245c2143c481a83362f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721645230334934890,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Annotations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=938769d6-65b8-4da9-96b8-e08104377798 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.734153241Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29c796a4-add6-4d6c-8551-063365a0a2b2 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.734223673Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29c796a4-add6-4d6c-8551-063365a0a2b2 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.735045165Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8b8fdea-194d-41e0-a85f-43a2f85561a9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.735493923Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721645614735472212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8b8fdea-194d-41e0-a85f-43a2f85561a9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.736016959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ffe8be6-f177-4ac2-966c-091aa656237e name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.736067300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ffe8be6-f177-4ac2-966c-091aa656237e name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:53:34 ha-461283 crio[683]: time="2024-07-22 10:53:34.736295261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e0d7d39c32b26c8cf125a279920668ca2e951905242a4868edd6a21fca1416c,PodSandboxId:816fd2e7cd706d322555f2f36fbd206ae986d8ed89be70bd1de6c5b649078cfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721645402570912123,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f9af1e9784e28d6f1a3d8907ed95d52086de262ed11e8309757b8a7f3db29b,PodSandboxId:df4c3d24ea139dbcc5ab94af0cf2be59201940f504340e6dc500c086e01fbfad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721645264413847237,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719,PodSandboxId:4723f41d773ba6946b0397961e08b81adf4a47a279dd5f445f56c2a16eef40bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645264374064373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a,PodSandboxId:0c2ec5e338fb367db8b1a2ad528c7af8ed202e4bbaeab5159468a25321378cae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645264350379429,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e9
7d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb,PodSandboxId:e171bdcb5b84c953c7434405b97b99a193f09e165b7345f7c8825f108427dce6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721645252505350541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44,PodSandboxId:ffbce6c0af4bc0412cbe807c70c74257bbd3514d683fc4a1672124862f6298c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172164525
0607547457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6533d7c334e7fda51727c6185c7fa171d3b1c652ce4d368c5d71df0f7feef49,PodSandboxId:97b8ec6ae1c31219c39f0e98c49a73f9bb5ffd0968b64a7215c5c3efc5ef5588,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17216452323
82303288,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ded6380659b2f4b7af2dd651372121bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240,PodSandboxId:54a1041d8e184916319d562d688313e1c1dd4452462b22bf88d27c23483b8d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721645230463101574,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d20cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c8bf4f5df71e6a77d448b9212062f96e90349adbbad8f2e329463bd3e1884d,PodSandboxId:ca3273ac397ead0e26c8356d955855dfe5575fc6c9a09e985060b34c33557ff5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721645230355376992,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08,PodSandboxId:e5abe1a4431950fd7a30c4d790682bbf3541b7faca55396453e5041f929979a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721645230331559113,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce5e449cc185968f3ceb60a6b397366e0ce5da8bed2aaf99f71b156613df39e,PodSandboxId:5d28c62eff243ce10766503792f28f0bd03da2ca60c8245c2143c481a83362f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721645230334934890,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Annotations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ffe8be6-f177-4ac2-966c-091aa656237e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4e0d7d39c32b2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   816fd2e7cd706       busybox-fc5497c4f-hkw9v
	19f9af1e9784e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   df4c3d24ea139       storage-provisioner
	5920882be1f91       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   4723f41d773ba       coredns-7db6d8ff4d-zb547
	797ae9e61fe18       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   0c2ec5e338fb3       coredns-7db6d8ff4d-qrfdd
	165b67d20aa98       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   e171bdcb5b84c       kindnet-hmrqh
	8ad5ed56ce259       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   ffbce6c0af4bc       kube-proxy-28zxf
	b6533d7c334e7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   97b8ec6ae1c31       kube-vip-ha-461283
	70a36c3082983       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   54a1041d8e184       kube-scheduler-ha-461283
	08c8bf4f5df71       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   ca3273ac397ea       kube-controller-manager-ha-461283
	9ce5e449cc185       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   5d28c62eff243       kube-apiserver-ha-461283
	dc7da6bdaabcb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   e5abe1a443195       etcd-ha-461283
	
	
	==> coredns [5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43426 - 1850 "HINFO IN 2832132329847409715.878106688873651055. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010179034s
	[INFO] 10.244.2.2:34562 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.01861668s
	[INFO] 10.244.1.2:53270 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.028944038s
	[INFO] 10.244.1.2:49060 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000094138s
	[INFO] 10.244.0.4:58821 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000212894s
	[INFO] 10.244.0.4:36629 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118072s
	[INFO] 10.244.0.4:39713 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00173787s
	[INFO] 10.244.2.2:34877 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249226s
	[INFO] 10.244.2.2:47321 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000169139s
	[INFO] 10.244.2.2:37812 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009086884s
	[INFO] 10.244.2.2:48940 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000477846s
	[INFO] 10.244.0.4:59919 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000067175s
	[INFO] 10.244.2.2:42645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116023s
	[INFO] 10.244.2.2:46340 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079971s
	[INFO] 10.244.1.2:40840 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133586s
	[INFO] 10.244.1.2:47315 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158975s
	[INFO] 10.244.1.2:41268 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093188s
	[INFO] 10.244.2.2:49311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014354s
	[INFO] 10.244.2.2:35152 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000214208s
	[INFO] 10.244.1.2:60324 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129417s
	[INFO] 10.244.1.2:58260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000228807s
	[INFO] 10.244.1.2:39894 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113717s
	[INFO] 10.244.0.4:56883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152128s
	[INFO] 10.244.0.4:39699 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074743s
	
	
	==> coredns [797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a] <==
	[INFO] 10.244.1.2:54694 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150701s
	[INFO] 10.244.1.2:34456 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147767s
	[INFO] 10.244.1.2:44962 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001367912s
	[INFO] 10.244.1.2:54147 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063996s
	[INFO] 10.244.1.2:60170 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000998s
	[INFO] 10.244.1.2:50008 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060128s
	[INFO] 10.244.0.4:57021 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001828391s
	[INFO] 10.244.0.4:43357 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000054533s
	[INFO] 10.244.0.4:60216 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000029938s
	[INFO] 10.244.0.4:48124 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001149366s
	[INFO] 10.244.0.4:34363 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000035155s
	[INFO] 10.244.0.4:44217 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049654s
	[INFO] 10.244.0.4:35448 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000035288s
	[INFO] 10.244.2.2:42369 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105863s
	[INFO] 10.244.2.2:51781 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069936s
	[INFO] 10.244.1.2:47904 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103521s
	[INFO] 10.244.0.4:49081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120239s
	[INFO] 10.244.0.4:40762 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121632s
	[INFO] 10.244.0.4:59110 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066206s
	[INFO] 10.244.0.4:39650 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092772s
	[INFO] 10.244.2.2:51074 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000265828s
	[INFO] 10.244.2.2:58192 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000130056s
	[INFO] 10.244.1.2:54053 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000255068s
	[INFO] 10.244.0.4:50225 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000074972s
	[INFO] 10.244.0.4:44950 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000080101s
	
	
	==> describe nodes <==
	Name:               ha-461283
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-461283
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-461283
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T10_47_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:47:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-461283
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 10:53:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:50:20 +0000   Mon, 22 Jul 2024 10:47:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:50:20 +0000   Mon, 22 Jul 2024 10:47:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:50:20 +0000   Mon, 22 Jul 2024 10:47:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:50:20 +0000   Mon, 22 Jul 2024 10:47:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-461283
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7adceecddbb41f7a81e4df2b7433c7b
	  System UUID:                f7adceec-ddbb-41f7-a81e-4df2b7433c7b
	  Boot ID:                    16bdd5e7-d27f-4ce8-a232-7bbe4c4337c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hkw9v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 coredns-7db6d8ff4d-qrfdd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m6s
	  kube-system                 coredns-7db6d8ff4d-zb547             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m6s
	  kube-system                 etcd-ha-461283                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m19s
	  kube-system                 kindnet-hmrqh                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m6s
	  kube-system                 kube-apiserver-ha-461283             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-461283    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-28zxf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-scheduler-ha-461283             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-vip-ha-461283                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m4s   kube-proxy       
	  Normal  Starting                 6m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m19s  kubelet          Node ha-461283 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s  kubelet          Node ha-461283 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s  kubelet          Node ha-461283 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m6s   node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	  Normal  NodeReady                5m52s  kubelet          Node ha-461283 status is now: NodeReady
	  Normal  RegisteredNode           4m55s  node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	  Normal  RegisteredNode           3m43s  node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	
	
	Name:               ha-461283-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-461283-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-461283
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T10_48_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:48:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-461283-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 10:51:05 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 22 Jul 2024 10:50:24 +0000   Mon, 22 Jul 2024 10:51:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 22 Jul 2024 10:50:24 +0000   Mon, 22 Jul 2024 10:51:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 22 Jul 2024 10:50:24 +0000   Mon, 22 Jul 2024 10:51:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 22 Jul 2024 10:50:24 +0000   Mon, 22 Jul 2024 10:51:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.207
	  Hostname:    ha-461283-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 164987e6e4bd4513b51bbf58f6e5b85b
	  System UUID:                164987e6-e4bd-4513-b51b-bf58f6e5b85b
	  Boot ID:                    e26a498d-a0e2-4cf4-8724-f393c49d215f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cgtcl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-ha-461283-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m12s
	  kube-system                 kindnet-qsphb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m14s
	  kube-system                 kube-apiserver-ha-461283-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-controller-manager-ha-461283-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-proxy-xkbsx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-scheduler-ha-461283-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-vip-ha-461283-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m14s (x8 over 5m14s)  kubelet          Node ha-461283-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m14s (x8 over 5m14s)  kubelet          Node ha-461283-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m14s (x7 over 5m14s)  kubelet          Node ha-461283-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m11s                  node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	  Normal  RegisteredNode           4m55s                  node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	  Normal  RegisteredNode           3m43s                  node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	  Normal  NodeNotReady             108s                   node-controller  Node ha-461283-m02 status is now: NodeNotReady
	
	
	Name:               ha-461283-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-461283-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-461283
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T10_49_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:49:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-461283-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 10:53:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:50:04 +0000   Mon, 22 Jul 2024 10:49:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:50:04 +0000   Mon, 22 Jul 2024 10:49:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:50:04 +0000   Mon, 22 Jul 2024 10:49:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:50:04 +0000   Mon, 22 Jul 2024 10:49:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-461283-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 daecc7f26d194772811b43378358ae92
	  System UUID:                daecc7f2-6d19-4772-811b-43378358ae92
	  Boot ID:                    d7ec2b29-5844-4c1f-be17-9ba20de6b894
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bf5vn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-ha-461283-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m
	  kube-system                 kindnet-9m2ms                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m2s
	  kube-system                 kube-apiserver-ha-461283-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-controller-manager-ha-461283-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-proxy-zdbjw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-ha-461283-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-vip-ha-461283-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m2s (x8 over 4m2s)  kubelet          Node ha-461283-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x8 over 4m2s)  kubelet          Node ha-461283-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x7 over 4m2s)  kubelet          Node ha-461283-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-461283-m03 event: Registered Node ha-461283-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-461283-m03 event: Registered Node ha-461283-m03 in Controller
	  Normal  RegisteredNode           3m43s                node-controller  Node ha-461283-m03 event: Registered Node ha-461283-m03 in Controller
	
	
	Name:               ha-461283-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-461283-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-461283
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T10_50_37_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:50:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-461283-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 10:53:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:51:07 +0000   Mon, 22 Jul 2024 10:50:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:51:07 +0000   Mon, 22 Jul 2024 10:50:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:51:07 +0000   Mon, 22 Jul 2024 10:50:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:51:07 +0000   Mon, 22 Jul 2024 10:50:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-461283-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 02bf2f0ce1a340479f7577f27f1f3419
	  System UUID:                02bf2f0c-e1a3-4047-9f75-77f27f1f3419
	  Boot ID:                    872589a4-4f7b-4349-a791-7c244df230df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8h8rp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m59s
	  kube-system                 kube-proxy-q6mgq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m59s (x2 over 2m59s)  kubelet          Node ha-461283-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m59s (x2 over 2m59s)  kubelet          Node ha-461283-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m59s (x2 over 2m59s)  kubelet          Node ha-461283-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m58s                  node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal  RegisteredNode           2m56s                  node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal  RegisteredNode           2m55s                  node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-461283-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul22 10:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049866] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038978] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.505156] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.146448] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.618407] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.217704] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.054835] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059084] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.188930] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[Jul22 10:47] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.257396] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +4.205609] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +3.948218] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.066710] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.986663] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.075913] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.885402] kauditd_printk_skb: 18 callbacks suppressed
	[ +22.062510] kauditd_printk_skb: 38 callbacks suppressed
	[Jul22 10:48] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08] <==
	{"level":"warn","ts":"2024-07-22T10:53:34.604855Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:34.704958Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:34.804817Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:34.904962Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.01005Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.021708Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.031465Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.036997Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.040679Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.043967Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.051639Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.059653Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.068334Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.072418Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.075909Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.08293Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.091311Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.096954Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.100215Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.103938Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.104021Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.111848Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.118466Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.124621Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:53:35.166269Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:53:35 up 6 min,  0 users,  load average: 0.29, 0.35, 0.19
	Linux ha-461283 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb] <==
	I0722 10:53:03.645303       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	I0722 10:53:13.645396       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 10:53:13.645563       1 main.go:299] handling current node
	I0722 10:53:13.645603       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 10:53:13.645622       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 10:53:13.645760       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0722 10:53:13.645879       1 main.go:322] Node ha-461283-m03 has CIDR [10.244.2.0/24] 
	I0722 10:53:13.645964       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 10:53:13.645984       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	I0722 10:53:23.639386       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 10:53:23.639456       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	I0722 10:53:23.639600       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 10:53:23.639625       1 main.go:299] handling current node
	I0722 10:53:23.639643       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 10:53:23.639648       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 10:53:23.639704       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0722 10:53:23.639723       1 main.go:322] Node ha-461283-m03 has CIDR [10.244.2.0/24] 
	I0722 10:53:33.636873       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 10:53:33.636982       1 main.go:299] handling current node
	I0722 10:53:33.637012       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 10:53:33.637030       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 10:53:33.637169       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0722 10:53:33.637190       1 main.go:322] Node ha-461283-m03 has CIDR [10.244.2.0/24] 
	I0722 10:53:33.637247       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 10:53:33.637270       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9ce5e449cc185968f3ceb60a6b397366e0ce5da8bed2aaf99f71b156613df39e] <==
	I0722 10:47:15.370718       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0722 10:47:15.378465       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.43]
	I0722 10:47:15.379695       1 controller.go:615] quota admission added evaluator for: endpoints
	I0722 10:47:15.385191       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0722 10:47:15.606948       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0722 10:47:16.584921       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0722 10:47:16.621767       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0722 10:47:16.635612       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0722 10:47:29.719176       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0722 10:47:29.970504       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0722 10:50:03.848951       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37786: use of closed network connection
	E0722 10:50:04.062242       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37796: use of closed network connection
	E0722 10:50:04.266766       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37818: use of closed network connection
	E0722 10:50:04.450098       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37830: use of closed network connection
	E0722 10:50:04.626203       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37842: use of closed network connection
	E0722 10:50:04.807649       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37862: use of closed network connection
	E0722 10:50:04.984137       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37874: use of closed network connection
	E0722 10:50:05.168704       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37894: use of closed network connection
	E0722 10:50:05.454231       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37920: use of closed network connection
	E0722 10:50:05.638627       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37934: use of closed network connection
	E0722 10:50:05.830620       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37954: use of closed network connection
	E0722 10:50:05.993232       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37970: use of closed network connection
	E0722 10:50:06.167765       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37988: use of closed network connection
	E0722 10:50:06.331279       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37996: use of closed network connection
	W0722 10:51:25.386272       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.127 192.168.39.43]
	
	
	==> kube-controller-manager [08c8bf4f5df71e6a77d448b9212062f96e90349adbbad8f2e329463bd3e1884d] <==
	I0722 10:48:24.764445       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-461283-m02"
	I0722 10:49:33.858754       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-461283-m03\" does not exist"
	I0722 10:49:33.900100       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-461283-m03" podCIDRs=["10.244.2.0/24"]
	I0722 10:49:34.792168       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-461283-m03"
	I0722 10:49:59.736360       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.282034ms"
	I0722 10:49:59.772290       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.752381ms"
	I0722 10:49:59.775843       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="343.44µs"
	I0722 10:49:59.778307       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.883µs"
	I0722 10:49:59.902309       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.263007ms"
	I0722 10:50:00.080226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="177.785274ms"
	I0722 10:50:00.102300       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.713011ms"
	I0722 10:50:00.102508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.548µs"
	I0722 10:50:01.476259       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.793µs"
	I0722 10:50:01.909019       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.104631ms"
	I0722 10:50:01.909184       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.29µs"
	I0722 10:50:02.292909       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.197887ms"
	I0722 10:50:02.293065       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.563µs"
	I0722 10:50:03.173507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.06045ms"
	I0722 10:50:03.174151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.901µs"
	I0722 10:50:36.409862       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-461283-m04\" does not exist"
	I0722 10:50:39.824309       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-461283-m04"
	I0722 10:50:54.481624       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-461283-m04"
	I0722 10:51:47.850513       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-461283-m04"
	I0722 10:51:47.952917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.983771ms"
	I0722 10:51:47.954751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.745µs"
	
	
	==> kube-proxy [8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44] <==
	I0722 10:47:30.927235       1 server_linux.go:69] "Using iptables proxy"
	I0722 10:47:30.946260       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.43"]
	I0722 10:47:31.023909       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 10:47:31.023974       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 10:47:31.023996       1 server_linux.go:165] "Using iptables Proxier"
	I0722 10:47:31.033400       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 10:47:31.033901       1 server.go:872] "Version info" version="v1.30.3"
	I0722 10:47:31.034458       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:47:31.037219       1 config.go:192] "Starting service config controller"
	I0722 10:47:31.037422       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 10:47:31.037494       1 config.go:101] "Starting endpoint slice config controller"
	I0722 10:47:31.037514       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 10:47:31.038605       1 config.go:319] "Starting node config controller"
	I0722 10:47:31.039656       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 10:47:31.137922       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 10:47:31.138129       1 shared_informer.go:320] Caches are synced for service config
	I0722 10:47:31.139959       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240] <==
	E0722 10:47:14.588060       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0722 10:47:14.619707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 10:47:14.619822       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 10:47:14.677040       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 10:47:14.677088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 10:47:14.694693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 10:47:14.694841       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 10:47:14.749285       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 10:47:14.749332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 10:47:15.011695       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 10:47:15.011746       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 10:47:15.029154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0722 10:47:15.029252       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0722 10:47:16.334995       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0722 10:49:59.728583       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-cgtcl\": pod busybox-fc5497c4f-cgtcl is already assigned to node \"ha-461283-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-cgtcl" node="ha-461283-m02"
	E0722 10:49:59.729634       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod cb9376f3-a8a3-4f85-a044-d0aa447ca494(default/busybox-fc5497c4f-cgtcl) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-cgtcl"
	E0722 10:49:59.729669       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-cgtcl\": pod busybox-fc5497c4f-cgtcl is already assigned to node \"ha-461283-m02\"" pod="default/busybox-fc5497c4f-cgtcl"
	I0722 10:49:59.729715       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-cgtcl" node="ha-461283-m02"
	E0722 10:49:59.736195       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-hkw9v\": pod busybox-fc5497c4f-hkw9v is already assigned to node \"ha-461283\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-hkw9v" node="ha-461283"
	E0722 10:49:59.736638       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 264707a6-61a4-4941-b996-0bebde73d4c7(default/busybox-fc5497c4f-hkw9v) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-hkw9v"
	E0722 10:49:59.736744       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-hkw9v\": pod busybox-fc5497c4f-hkw9v is already assigned to node \"ha-461283\"" pod="default/busybox-fc5497c4f-hkw9v"
	I0722 10:49:59.736843       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-hkw9v" node="ha-461283"
	E0722 10:50:36.492116       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8h8rp\": pod kindnet-8h8rp is already assigned to node \"ha-461283-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-8h8rp" node="ha-461283-m04"
	E0722 10:50:36.493842       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8h8rp\": pod kindnet-8h8rp is already assigned to node \"ha-461283-m04\"" pod="kube-system/kindnet-8h8rp"
	I0722 10:50:36.493969       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8h8rp" node="ha-461283-m04"
	
	
	==> kubelet <==
	Jul 22 10:49:59 ha-461283 kubelet[1372]: E0722 10:49:59.727821    1372 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-461283" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-461283' and this object
	Jul 22 10:49:59 ha-461283 kubelet[1372]: I0722 10:49:59.753956    1372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnkmn\" (UniqueName: \"kubernetes.io/projected/264707a6-61a4-4941-b996-0bebde73d4c7-kube-api-access-nnkmn\") pod \"busybox-fc5497c4f-hkw9v\" (UID: \"264707a6-61a4-4941-b996-0bebde73d4c7\") " pod="default/busybox-fc5497c4f-hkw9v"
	Jul 22 10:50:00 ha-461283 kubelet[1372]: E0722 10:50:00.916753    1372 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jul 22 10:50:00 ha-461283 kubelet[1372]: E0722 10:50:00.917248    1372 projected.go:200] Error preparing data for projected volume kube-api-access-nnkmn for pod default/busybox-fc5497c4f-hkw9v: failed to sync configmap cache: timed out waiting for the condition
	Jul 22 10:50:00 ha-461283 kubelet[1372]: E0722 10:50:00.918059    1372 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/264707a6-61a4-4941-b996-0bebde73d4c7-kube-api-access-nnkmn podName:264707a6-61a4-4941-b996-0bebde73d4c7 nodeName:}" failed. No retries permitted until 2024-07-22 10:50:01.417942208 +0000 UTC m=+165.061180480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nnkmn" (UniqueName: "kubernetes.io/projected/264707a6-61a4-4941-b996-0bebde73d4c7-kube-api-access-nnkmn") pod "busybox-fc5497c4f-hkw9v" (UID: "264707a6-61a4-4941-b996-0bebde73d4c7") : failed to sync configmap cache: timed out waiting for the condition
	Jul 22 10:50:16 ha-461283 kubelet[1372]: E0722 10:50:16.529544    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:50:16 ha-461283 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:50:16 ha-461283 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:50:16 ha-461283 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:50:16 ha-461283 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:51:16 ha-461283 kubelet[1372]: E0722 10:51:16.532151    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:51:16 ha-461283 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:51:16 ha-461283 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:51:16 ha-461283 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:51:16 ha-461283 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:52:16 ha-461283 kubelet[1372]: E0722 10:52:16.529087    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:52:16 ha-461283 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:52:16 ha-461283 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:52:16 ha-461283 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:52:16 ha-461283 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:53:16 ha-461283 kubelet[1372]: E0722 10:53:16.534219    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:53:16 ha-461283 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:53:16 ha-461283 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:53:16 ha-461283 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:53:16 ha-461283 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-461283 -n ha-461283
helpers_test.go:261: (dbg) Run:  kubectl --context ha-461283 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (62.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr: exit status 3 (3.196948002s)

                                                
                                                
-- stdout --
	ha-461283
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-461283-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 10:53:39.673357   28991 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:53:39.673453   28991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:53:39.673461   28991 out.go:304] Setting ErrFile to fd 2...
	I0722 10:53:39.673465   28991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:53:39.673660   28991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:53:39.673817   28991 out.go:298] Setting JSON to false
	I0722 10:53:39.673842   28991 mustload.go:65] Loading cluster: ha-461283
	I0722 10:53:39.673887   28991 notify.go:220] Checking for updates...
	I0722 10:53:39.674191   28991 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:53:39.674205   28991 status.go:255] checking status of ha-461283 ...
	I0722 10:53:39.674609   28991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:39.674667   28991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:39.694482   28991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43789
	I0722 10:53:39.694994   28991 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:39.695517   28991 main.go:141] libmachine: Using API Version  1
	I0722 10:53:39.695540   28991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:39.695895   28991 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:39.696196   28991 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:53:39.697788   28991 status.go:330] ha-461283 host status = "Running" (err=<nil>)
	I0722 10:53:39.697804   28991 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:53:39.698067   28991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:39.698102   28991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:39.712372   28991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46083
	I0722 10:53:39.712818   28991 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:39.713273   28991 main.go:141] libmachine: Using API Version  1
	I0722 10:53:39.713295   28991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:39.713611   28991 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:39.713775   28991 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:53:39.717017   28991 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:39.717458   28991 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:53:39.717483   28991 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:39.717646   28991 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:53:39.718017   28991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:39.718058   28991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:39.733600   28991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40427
	I0722 10:53:39.734030   28991 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:39.734539   28991 main.go:141] libmachine: Using API Version  1
	I0722 10:53:39.734563   28991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:39.734850   28991 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:39.735016   28991 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:53:39.735224   28991 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:39.735245   28991 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:53:39.738358   28991 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:39.738771   28991 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:53:39.738793   28991 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:39.738999   28991 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:53:39.739180   28991 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:53:39.739317   28991 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:53:39.739463   28991 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:53:39.815618   28991 ssh_runner.go:195] Run: systemctl --version
	I0722 10:53:39.821278   28991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:53:39.834757   28991 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:53:39.834785   28991 api_server.go:166] Checking apiserver status ...
	I0722 10:53:39.834822   28991 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:53:39.848899   28991 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup
	W0722 10:53:39.859780   28991 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:53:39.859820   28991 ssh_runner.go:195] Run: ls
	I0722 10:53:39.864483   28991 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:53:39.870423   28991 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:53:39.870444   28991 status.go:422] ha-461283 apiserver status = Running (err=<nil>)
	I0722 10:53:39.870455   28991 status.go:257] ha-461283 status: &{Name:ha-461283 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:53:39.870474   28991 status.go:255] checking status of ha-461283-m02 ...
	I0722 10:53:39.870752   28991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:39.870797   28991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:39.885733   28991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33973
	I0722 10:53:39.886126   28991 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:39.886641   28991 main.go:141] libmachine: Using API Version  1
	I0722 10:53:39.886666   28991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:39.886987   28991 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:39.887166   28991 main.go:141] libmachine: (ha-461283-m02) Calling .GetState
	I0722 10:53:39.888835   28991 status.go:330] ha-461283-m02 host status = "Running" (err=<nil>)
	I0722 10:53:39.888854   28991 host.go:66] Checking if "ha-461283-m02" exists ...
	I0722 10:53:39.889124   28991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:39.889152   28991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:39.902849   28991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0722 10:53:39.903162   28991 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:39.903560   28991 main.go:141] libmachine: Using API Version  1
	I0722 10:53:39.903582   28991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:39.903868   28991 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:39.904034   28991 main.go:141] libmachine: (ha-461283-m02) Calling .GetIP
	I0722 10:53:39.906942   28991 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:39.907355   28991 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:53:39.907381   28991 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:39.907468   28991 host.go:66] Checking if "ha-461283-m02" exists ...
	I0722 10:53:39.907737   28991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:39.907772   28991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:39.922037   28991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41233
	I0722 10:53:39.922401   28991 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:39.922827   28991 main.go:141] libmachine: Using API Version  1
	I0722 10:53:39.922849   28991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:39.923144   28991 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:39.923284   28991 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:53:39.923463   28991 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:39.923481   28991 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:53:39.926081   28991 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:39.926494   28991 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:53:39.926527   28991 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:39.926655   28991 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:53:39.926827   28991 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:53:39.926927   28991 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:53:39.927077   28991 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	W0722 10:53:42.484588   28991 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.207:22: connect: no route to host
	W0722 10:53:42.484671   28991 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	E0722 10:53:42.484689   28991 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:53:42.484698   28991 status.go:257] ha-461283-m02 status: &{Name:ha-461283-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0722 10:53:42.484713   28991 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:53:42.484722   28991 status.go:255] checking status of ha-461283-m03 ...
	I0722 10:53:42.485152   28991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:42.485200   28991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:42.501032   28991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42755
	I0722 10:53:42.501444   28991 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:42.501884   28991 main.go:141] libmachine: Using API Version  1
	I0722 10:53:42.501905   28991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:42.502204   28991 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:42.502376   28991 main.go:141] libmachine: (ha-461283-m03) Calling .GetState
	I0722 10:53:42.503847   28991 status.go:330] ha-461283-m03 host status = "Running" (err=<nil>)
	I0722 10:53:42.503864   28991 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:53:42.504189   28991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:42.504220   28991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:42.518315   28991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0722 10:53:42.518650   28991 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:42.519100   28991 main.go:141] libmachine: Using API Version  1
	I0722 10:53:42.519118   28991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:42.519402   28991 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:42.519562   28991 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:53:42.521944   28991 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:53:42.522275   28991 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:53:42.522304   28991 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:53:42.522435   28991 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:53:42.522718   28991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:42.522755   28991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:42.539383   28991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44297
	I0722 10:53:42.539796   28991 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:42.540204   28991 main.go:141] libmachine: Using API Version  1
	I0722 10:53:42.540226   28991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:42.540542   28991 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:42.540734   28991 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:53:42.540895   28991 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:42.540920   28991 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:53:42.543407   28991 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:53:42.543815   28991 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:53:42.543840   28991 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:53:42.543990   28991 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:53:42.544160   28991 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:53:42.544299   28991 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:53:42.544457   28991 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:53:42.627769   28991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:53:42.643661   28991 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:53:42.643690   28991 api_server.go:166] Checking apiserver status ...
	I0722 10:53:42.643727   28991 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:53:42.659076   28991 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup
	W0722 10:53:42.669307   28991 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:53:42.669366   28991 ssh_runner.go:195] Run: ls
	I0722 10:53:42.674290   28991 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:53:42.679636   28991 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:53:42.679660   28991 status.go:422] ha-461283-m03 apiserver status = Running (err=<nil>)
	I0722 10:53:42.679670   28991 status.go:257] ha-461283-m03 status: &{Name:ha-461283-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:53:42.679688   28991 status.go:255] checking status of ha-461283-m04 ...
	I0722 10:53:42.679978   28991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:42.680018   28991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:42.694515   28991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33603
	I0722 10:53:42.694952   28991 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:42.695475   28991 main.go:141] libmachine: Using API Version  1
	I0722 10:53:42.695503   28991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:42.695835   28991 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:42.696116   28991 main.go:141] libmachine: (ha-461283-m04) Calling .GetState
	I0722 10:53:42.697625   28991 status.go:330] ha-461283-m04 host status = "Running" (err=<nil>)
	I0722 10:53:42.697639   28991 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:53:42.697914   28991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:42.697944   28991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:42.712026   28991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
	I0722 10:53:42.712459   28991 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:42.712942   28991 main.go:141] libmachine: Using API Version  1
	I0722 10:53:42.712970   28991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:42.713240   28991 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:42.713407   28991 main.go:141] libmachine: (ha-461283-m04) Calling .GetIP
	I0722 10:53:42.715834   28991 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:53:42.716343   28991 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:53:42.716369   28991 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:53:42.716556   28991 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:53:42.716905   28991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:42.716948   28991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:42.731068   28991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40645
	I0722 10:53:42.731484   28991 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:42.731944   28991 main.go:141] libmachine: Using API Version  1
	I0722 10:53:42.731960   28991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:42.732243   28991 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:42.732424   28991 main.go:141] libmachine: (ha-461283-m04) Calling .DriverName
	I0722 10:53:42.732615   28991 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:42.732630   28991 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHHostname
	I0722 10:53:42.735186   28991 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:53:42.735522   28991 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:53:42.735546   28991 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:53:42.735668   28991 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHPort
	I0722 10:53:42.735851   28991 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHKeyPath
	I0722 10:53:42.736005   28991 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHUsername
	I0722 10:53:42.736187   28991 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m04/id_rsa Username:docker}
	I0722 10:53:42.815555   28991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:53:42.829708   28991 status.go:257] ha-461283-m04 status: &{Name:ha-461283-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
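The per-node apiserver check in the log above locates the kube-apiserver process with pgrep, then fails to find a freezer cgroup for it (the `egrep ^[0-9]+:freezer: /proc/<pid>/cgroup` call exits 1, which is expected when the guest uses cgroup v2 and has no separate freezer hierarchy), and falls back to an HTTPS probe of /healthz on the HA endpoint, which returns "200: ok". The following is a minimal, hypothetical sketch of that final probe, assuming the 192.168.39.254:8443 endpoint reported in the log and a self-signed test CA (hence InsecureSkipVerify); it is illustrative only and not minikube's actual api_server.go code.

// Hypothetical sketch (not minikube source): probe an apiserver /healthz
// endpoint the way the status check above does, skipping TLS verification
// because the test cluster's CA is not in the local trust store.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// 192.168.39.254:8443 is the HA virtual endpoint reported in the log above.
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}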
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr: exit status 3 (4.854619382s)

                                                
                                                
-- stdout --
	ha-461283
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-461283-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 10:53:44.157037   29091 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:53:44.157157   29091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:53:44.157166   29091 out.go:304] Setting ErrFile to fd 2...
	I0722 10:53:44.157170   29091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:53:44.157351   29091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:53:44.157493   29091 out.go:298] Setting JSON to false
	I0722 10:53:44.157518   29091 mustload.go:65] Loading cluster: ha-461283
	I0722 10:53:44.157620   29091 notify.go:220] Checking for updates...
	I0722 10:53:44.157945   29091 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:53:44.157961   29091 status.go:255] checking status of ha-461283 ...
	I0722 10:53:44.158458   29091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:44.158491   29091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:44.178329   29091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33767
	I0722 10:53:44.178691   29091 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:44.179183   29091 main.go:141] libmachine: Using API Version  1
	I0722 10:53:44.179213   29091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:44.179532   29091 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:44.179707   29091 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:53:44.181149   29091 status.go:330] ha-461283 host status = "Running" (err=<nil>)
	I0722 10:53:44.181163   29091 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:53:44.181437   29091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:44.181474   29091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:44.195586   29091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34087
	I0722 10:53:44.195893   29091 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:44.196283   29091 main.go:141] libmachine: Using API Version  1
	I0722 10:53:44.196300   29091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:44.196661   29091 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:44.196851   29091 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:53:44.199339   29091 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:44.199696   29091 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:53:44.199731   29091 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:44.199830   29091 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:53:44.200197   29091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:44.200236   29091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:44.214803   29091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0722 10:53:44.215157   29091 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:44.215576   29091 main.go:141] libmachine: Using API Version  1
	I0722 10:53:44.215594   29091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:44.215920   29091 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:44.216107   29091 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:53:44.216296   29091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:44.216316   29091 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:53:44.218901   29091 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:44.219292   29091 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:53:44.219326   29091 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:44.219443   29091 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:53:44.219599   29091 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:53:44.219739   29091 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:53:44.219882   29091 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:53:44.299850   29091 ssh_runner.go:195] Run: systemctl --version
	I0722 10:53:44.305809   29091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:53:44.320779   29091 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:53:44.320806   29091 api_server.go:166] Checking apiserver status ...
	I0722 10:53:44.320849   29091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:53:44.333988   29091 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup
	W0722 10:53:44.343438   29091 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:53:44.343486   29091 ssh_runner.go:195] Run: ls
	I0722 10:53:44.347618   29091 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:53:44.353251   29091 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:53:44.353271   29091 status.go:422] ha-461283 apiserver status = Running (err=<nil>)
	I0722 10:53:44.353278   29091 status.go:257] ha-461283 status: &{Name:ha-461283 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:53:44.353295   29091 status.go:255] checking status of ha-461283-m02 ...
	I0722 10:53:44.353583   29091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:44.353639   29091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:44.368540   29091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0722 10:53:44.368913   29091 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:44.369364   29091 main.go:141] libmachine: Using API Version  1
	I0722 10:53:44.369385   29091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:44.369706   29091 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:44.369876   29091 main.go:141] libmachine: (ha-461283-m02) Calling .GetState
	I0722 10:53:44.371299   29091 status.go:330] ha-461283-m02 host status = "Running" (err=<nil>)
	I0722 10:53:44.371314   29091 host.go:66] Checking if "ha-461283-m02" exists ...
	I0722 10:53:44.371599   29091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:44.371632   29091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:44.385606   29091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35027
	I0722 10:53:44.385931   29091 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:44.386402   29091 main.go:141] libmachine: Using API Version  1
	I0722 10:53:44.386424   29091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:44.386708   29091 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:44.386881   29091 main.go:141] libmachine: (ha-461283-m02) Calling .GetIP
	I0722 10:53:44.389649   29091 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:44.390025   29091 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:53:44.390051   29091 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:44.390173   29091 host.go:66] Checking if "ha-461283-m02" exists ...
	I0722 10:53:44.390470   29091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:44.390503   29091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:44.404931   29091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46117
	I0722 10:53:44.405261   29091 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:44.405647   29091 main.go:141] libmachine: Using API Version  1
	I0722 10:53:44.405663   29091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:44.405923   29091 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:44.406089   29091 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:53:44.406252   29091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:44.406272   29091 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:53:44.408749   29091 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:44.409157   29091 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:53:44.409188   29091 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:44.409320   29091 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:53:44.409471   29091 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:53:44.409627   29091 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:53:44.409756   29091 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	W0722 10:53:45.556660   29091 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:53:45.556717   29091 retry.go:31] will retry after 165.550283ms: dial tcp 192.168.39.207:22: connect: no route to host
	W0722 10:53:48.628629   29091 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.207:22: connect: no route to host
	W0722 10:53:48.628700   29091 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	E0722 10:53:48.628713   29091 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:53:48.628720   29091 status.go:257] ha-461283-m02 status: &{Name:ha-461283-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0722 10:53:48.628745   29091 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:53:48.628754   29091 status.go:255] checking status of ha-461283-m03 ...
	I0722 10:53:48.629072   29091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:48.629109   29091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:48.644673   29091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37305
	I0722 10:53:48.645050   29091 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:48.645480   29091 main.go:141] libmachine: Using API Version  1
	I0722 10:53:48.645499   29091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:48.645826   29091 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:48.646025   29091 main.go:141] libmachine: (ha-461283-m03) Calling .GetState
	I0722 10:53:48.647787   29091 status.go:330] ha-461283-m03 host status = "Running" (err=<nil>)
	I0722 10:53:48.647805   29091 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:53:48.648084   29091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:48.648126   29091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:48.661742   29091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38333
	I0722 10:53:48.662155   29091 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:48.662596   29091 main.go:141] libmachine: Using API Version  1
	I0722 10:53:48.662616   29091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:48.662884   29091 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:48.663045   29091 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:53:48.665434   29091 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:53:48.665838   29091 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:53:48.665868   29091 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:53:48.666041   29091 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:53:48.666419   29091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:48.666481   29091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:48.680178   29091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32811
	I0722 10:53:48.680509   29091 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:48.680894   29091 main.go:141] libmachine: Using API Version  1
	I0722 10:53:48.680911   29091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:48.681172   29091 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:48.681352   29091 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:53:48.681521   29091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:48.681542   29091 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:53:48.683726   29091 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:53:48.684083   29091 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:53:48.684112   29091 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:53:48.684234   29091 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:53:48.684403   29091 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:53:48.684548   29091 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:53:48.684684   29091 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:53:48.768179   29091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:53:48.787779   29091 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:53:48.787806   29091 api_server.go:166] Checking apiserver status ...
	I0722 10:53:48.787843   29091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:53:48.801338   29091 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup
	W0722 10:53:48.810759   29091 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:53:48.810804   29091 ssh_runner.go:195] Run: ls
	I0722 10:53:48.815632   29091 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:53:48.819934   29091 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:53:48.819953   29091 status.go:422] ha-461283-m03 apiserver status = Running (err=<nil>)
	I0722 10:53:48.819963   29091 status.go:257] ha-461283-m03 status: &{Name:ha-461283-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:53:48.819977   29091 status.go:255] checking status of ha-461283-m04 ...
	I0722 10:53:48.820264   29091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:48.820293   29091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:48.834542   29091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39809
	I0722 10:53:48.834970   29091 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:48.835414   29091 main.go:141] libmachine: Using API Version  1
	I0722 10:53:48.835432   29091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:48.835742   29091 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:48.835929   29091 main.go:141] libmachine: (ha-461283-m04) Calling .GetState
	I0722 10:53:48.837488   29091 status.go:330] ha-461283-m04 host status = "Running" (err=<nil>)
	I0722 10:53:48.837504   29091 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:53:48.837787   29091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:48.837819   29091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:48.852365   29091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46347
	I0722 10:53:48.852792   29091 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:48.853197   29091 main.go:141] libmachine: Using API Version  1
	I0722 10:53:48.853218   29091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:48.853515   29091 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:48.853681   29091 main.go:141] libmachine: (ha-461283-m04) Calling .GetIP
	I0722 10:53:48.856261   29091 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:53:48.856744   29091 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:53:48.856773   29091 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:53:48.856906   29091 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:53:48.857208   29091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:48.857267   29091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:48.870790   29091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
	I0722 10:53:48.871188   29091 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:48.871630   29091 main.go:141] libmachine: Using API Version  1
	I0722 10:53:48.871649   29091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:48.871945   29091 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:48.872122   29091 main.go:141] libmachine: (ha-461283-m04) Calling .DriverName
	I0722 10:53:48.872308   29091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:48.872326   29091 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHHostname
	I0722 10:53:48.874900   29091 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:53:48.875300   29091 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:53:48.875330   29091 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:53:48.875439   29091 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHPort
	I0722 10:53:48.875587   29091 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHKeyPath
	I0722 10:53:48.875703   29091 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHUsername
	I0722 10:53:48.875831   29091 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m04/id_rsa Username:docker}
	I0722 10:53:48.955976   29091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:53:48.971733   29091 status.go:257] ha-461283-m04 status: &{Name:ha-461283-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
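Both status runs above report ha-461283-m02 as Host:Error for the same reason: every SSH dial to 192.168.39.207:22 fails with "connect: no route to host", even after the short retry, so the node's kubelet and apiserver are marked Nonexistent without ever being queried. The sketch below is a minimal, hypothetical reachability check under the same assumption (the m02 address taken from the log); it is not the sshutil retry code itself.

// Hypothetical sketch (not minikube source): check whether a node's SSH port
// is reachable, retrying a few times the way the sshutil dial above does.
// "no route to host" here corresponds to the Host:Error status for m02.
package main

import (
	"fmt"
	"net"
	"time"
)

func sshReachable(addr string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		var conn net.Conn
		conn, err = net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(200 * time.Millisecond) // back off briefly before retrying
	}
	return err
}

func main() {
	// 192.168.39.207 is ha-461283-m02's address from the log above.
	if err := sshReachable("192.168.39.207:22", 3); err != nil {
		fmt.Println("node unreachable:", err)
	} else {
		fmt.Println("node reachable")
	}
}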
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr: exit status 3 (4.208971446s)

                                                
                                                
-- stdout --
	ha-461283
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-461283-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 10:53:51.081011   29192 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:53:51.081267   29192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:53:51.081278   29192 out.go:304] Setting ErrFile to fd 2...
	I0722 10:53:51.081284   29192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:53:51.081502   29192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:53:51.081680   29192 out.go:298] Setting JSON to false
	I0722 10:53:51.081715   29192 mustload.go:65] Loading cluster: ha-461283
	I0722 10:53:51.081775   29192 notify.go:220] Checking for updates...
	I0722 10:53:51.082197   29192 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:53:51.082222   29192 status.go:255] checking status of ha-461283 ...
	I0722 10:53:51.082705   29192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:51.082738   29192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:51.098028   29192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44239
	I0722 10:53:51.098449   29192 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:51.098995   29192 main.go:141] libmachine: Using API Version  1
	I0722 10:53:51.099019   29192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:51.099493   29192 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:51.099741   29192 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:53:51.101197   29192 status.go:330] ha-461283 host status = "Running" (err=<nil>)
	I0722 10:53:51.101215   29192 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:53:51.101485   29192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:51.101521   29192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:51.116131   29192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42097
	I0722 10:53:51.116488   29192 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:51.116887   29192 main.go:141] libmachine: Using API Version  1
	I0722 10:53:51.116901   29192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:51.117222   29192 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:51.117386   29192 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:53:51.120085   29192 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:51.120466   29192 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:53:51.120501   29192 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:51.120567   29192 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:53:51.120839   29192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:51.120878   29192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:51.135941   29192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45523
	I0722 10:53:51.136327   29192 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:51.136737   29192 main.go:141] libmachine: Using API Version  1
	I0722 10:53:51.136757   29192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:51.137052   29192 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:51.137241   29192 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:53:51.137445   29192 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:51.137482   29192 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:53:51.140002   29192 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:51.140428   29192 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:53:51.140446   29192 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:51.140577   29192 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:53:51.140741   29192 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:53:51.140866   29192 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:53:51.141021   29192 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:53:51.220669   29192 ssh_runner.go:195] Run: systemctl --version
	I0722 10:53:51.227013   29192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:53:51.244124   29192 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:53:51.244159   29192 api_server.go:166] Checking apiserver status ...
	I0722 10:53:51.244207   29192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:53:51.258175   29192 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup
	W0722 10:53:51.267097   29192 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:53:51.267147   29192 ssh_runner.go:195] Run: ls
	I0722 10:53:51.272686   29192 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:53:51.276673   29192 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:53:51.276693   29192 status.go:422] ha-461283 apiserver status = Running (err=<nil>)
	I0722 10:53:51.276702   29192 status.go:257] ha-461283 status: &{Name:ha-461283 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:53:51.276715   29192 status.go:255] checking status of ha-461283-m02 ...
	I0722 10:53:51.277005   29192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:51.277041   29192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:51.291354   29192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0722 10:53:51.291749   29192 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:51.292230   29192 main.go:141] libmachine: Using API Version  1
	I0722 10:53:51.292249   29192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:51.292619   29192 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:51.292822   29192 main.go:141] libmachine: (ha-461283-m02) Calling .GetState
	I0722 10:53:51.294434   29192 status.go:330] ha-461283-m02 host status = "Running" (err=<nil>)
	I0722 10:53:51.294451   29192 host.go:66] Checking if "ha-461283-m02" exists ...
	I0722 10:53:51.294861   29192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:51.294901   29192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:51.311070   29192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34881
	I0722 10:53:51.311418   29192 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:51.311871   29192 main.go:141] libmachine: Using API Version  1
	I0722 10:53:51.311897   29192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:51.312198   29192 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:51.312436   29192 main.go:141] libmachine: (ha-461283-m02) Calling .GetIP
	I0722 10:53:51.315200   29192 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:51.315610   29192 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:53:51.315638   29192 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:51.315838   29192 host.go:66] Checking if "ha-461283-m02" exists ...
	I0722 10:53:51.316116   29192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:51.316160   29192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:51.330790   29192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40937
	I0722 10:53:51.331163   29192 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:51.331624   29192 main.go:141] libmachine: Using API Version  1
	I0722 10:53:51.331646   29192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:51.331925   29192 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:51.332112   29192 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:53:51.332272   29192 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:51.332291   29192 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:53:51.335105   29192 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:51.335531   29192 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:53:51.335556   29192 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:51.335678   29192 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:53:51.335955   29192 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:53:51.336118   29192 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:53:51.336259   29192 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	W0722 10:53:51.704574   29192 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:53:51.704630   29192 retry.go:31] will retry after 130.332845ms: dial tcp 192.168.39.207:22: connect: no route to host
	W0722 10:53:54.900595   29192 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.207:22: connect: no route to host
	W0722 10:53:54.900704   29192 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	E0722 10:53:54.900725   29192 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:53:54.900732   29192 status.go:257] ha-461283-m02 status: &{Name:ha-461283-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0722 10:53:54.900749   29192 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:53:54.900755   29192 status.go:255] checking status of ha-461283-m03 ...
	I0722 10:53:54.901087   29192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:54.901137   29192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:54.915409   29192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0722 10:53:54.915853   29192 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:54.916318   29192 main.go:141] libmachine: Using API Version  1
	I0722 10:53:54.916342   29192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:54.916658   29192 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:54.916846   29192 main.go:141] libmachine: (ha-461283-m03) Calling .GetState
	I0722 10:53:54.918212   29192 status.go:330] ha-461283-m03 host status = "Running" (err=<nil>)
	I0722 10:53:54.918226   29192 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:53:54.918597   29192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:54.918633   29192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:54.932113   29192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32957
	I0722 10:53:54.932449   29192 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:54.932823   29192 main.go:141] libmachine: Using API Version  1
	I0722 10:53:54.932842   29192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:54.933120   29192 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:54.933280   29192 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:53:54.935858   29192 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:53:54.936278   29192 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:53:54.936307   29192 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:53:54.936406   29192 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:53:54.936710   29192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:54.936752   29192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:54.950216   29192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35809
	I0722 10:53:54.950614   29192 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:54.951003   29192 main.go:141] libmachine: Using API Version  1
	I0722 10:53:54.951019   29192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:54.951276   29192 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:54.951435   29192 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:53:54.951594   29192 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:54.951612   29192 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:53:54.953943   29192 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:53:54.954295   29192 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:53:54.954321   29192 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:53:54.954447   29192 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:53:54.954596   29192 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:53:54.954710   29192 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:53:54.954874   29192 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:53:55.036500   29192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:53:55.052450   29192 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:53:55.052472   29192 api_server.go:166] Checking apiserver status ...
	I0722 10:53:55.052501   29192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:53:55.066431   29192 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup
	W0722 10:53:55.083716   29192 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:53:55.083763   29192 ssh_runner.go:195] Run: ls
	I0722 10:53:55.088304   29192 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:53:55.092995   29192 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:53:55.093013   29192 status.go:422] ha-461283-m03 apiserver status = Running (err=<nil>)
	I0722 10:53:55.093022   29192 status.go:257] ha-461283-m03 status: &{Name:ha-461283-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:53:55.093041   29192 status.go:255] checking status of ha-461283-m04 ...
	I0722 10:53:55.093423   29192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:55.093459   29192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:55.110230   29192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46741
	I0722 10:53:55.110606   29192 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:55.111077   29192 main.go:141] libmachine: Using API Version  1
	I0722 10:53:55.111100   29192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:55.111461   29192 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:55.111622   29192 main.go:141] libmachine: (ha-461283-m04) Calling .GetState
	I0722 10:53:55.113149   29192 status.go:330] ha-461283-m04 host status = "Running" (err=<nil>)
	I0722 10:53:55.113165   29192 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:53:55.113534   29192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:55.113593   29192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:55.127380   29192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40069
	I0722 10:53:55.127715   29192 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:55.128151   29192 main.go:141] libmachine: Using API Version  1
	I0722 10:53:55.128169   29192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:55.128465   29192 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:55.128648   29192 main.go:141] libmachine: (ha-461283-m04) Calling .GetIP
	I0722 10:53:55.131031   29192 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:53:55.131434   29192 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:53:55.131459   29192 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:53:55.131569   29192 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:53:55.131836   29192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:55.131867   29192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:55.145424   29192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0722 10:53:55.145755   29192 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:55.146236   29192 main.go:141] libmachine: Using API Version  1
	I0722 10:53:55.146257   29192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:55.146537   29192 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:55.146734   29192 main.go:141] libmachine: (ha-461283-m04) Calling .DriverName
	I0722 10:53:55.146940   29192 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:55.146965   29192 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHHostname
	I0722 10:53:55.149640   29192 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:53:55.150036   29192 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:53:55.150072   29192 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:53:55.150177   29192 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHPort
	I0722 10:53:55.150330   29192 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHKeyPath
	I0722 10:53:55.150490   29192 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHUsername
	I0722 10:53:55.150639   29192 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m04/id_rsa Username:docker}
	I0722 10:53:55.231497   29192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:53:55.246564   29192 status.go:257] ha-461283-m04 status: &{Name:ha-461283-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
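Each per-node check above begins by running `df -h /var | awk 'NR==2{print $5}'` over SSH to read the usage percentage of /var before reporting kubelet and apiserver state; for m02 that is the command whose failure triggers the storage-capacity error. A minimal local sketch of the same pipeline follows (illustrative only, run locally rather than over SSH).

// Hypothetical sketch (not minikube source): run the same disk-usage pipeline
// the status check issues on each node and print the /var usage percentage.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err != nil {
		fmt.Println("df probe failed:", err)
		return
	}
	fmt.Println("/var usage:", strings.TrimSpace(string(out))) // e.g. "23%"
}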
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr
E0722 10:53:56.771342   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr: exit status 3 (4.732604068s)

                                                
                                                
-- stdout --
	ha-461283
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-461283-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 10:53:56.697166   29292 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:53:56.697321   29292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:53:56.697330   29292 out.go:304] Setting ErrFile to fd 2...
	I0722 10:53:56.697334   29292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:53:56.697506   29292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:53:56.697657   29292 out.go:298] Setting JSON to false
	I0722 10:53:56.697685   29292 mustload.go:65] Loading cluster: ha-461283
	I0722 10:53:56.697727   29292 notify.go:220] Checking for updates...
	I0722 10:53:56.698147   29292 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:53:56.698163   29292 status.go:255] checking status of ha-461283 ...
	I0722 10:53:56.698597   29292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:56.698642   29292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:56.716323   29292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33971
	I0722 10:53:56.716721   29292 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:56.717248   29292 main.go:141] libmachine: Using API Version  1
	I0722 10:53:56.717273   29292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:56.717635   29292 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:56.717847   29292 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:53:56.719565   29292 status.go:330] ha-461283 host status = "Running" (err=<nil>)
	I0722 10:53:56.719584   29292 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:53:56.719859   29292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:56.719896   29292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:56.734880   29292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I0722 10:53:56.735224   29292 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:56.735619   29292 main.go:141] libmachine: Using API Version  1
	I0722 10:53:56.735638   29292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:56.735955   29292 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:56.736125   29292 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:53:56.738886   29292 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:56.739296   29292 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:53:56.739327   29292 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:56.739454   29292 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:53:56.739780   29292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:56.739830   29292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:56.754074   29292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38401
	I0722 10:53:56.754434   29292 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:56.754852   29292 main.go:141] libmachine: Using API Version  1
	I0722 10:53:56.754870   29292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:56.755155   29292 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:56.755335   29292 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:53:56.755499   29292 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:56.755522   29292 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:53:56.758173   29292 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:56.758582   29292 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:53:56.758615   29292 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:53:56.758705   29292 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:53:56.758829   29292 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:53:56.758959   29292 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:53:56.759054   29292 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:53:56.836340   29292 ssh_runner.go:195] Run: systemctl --version
	I0722 10:53:56.843190   29292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:53:56.857398   29292 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:53:56.857423   29292 api_server.go:166] Checking apiserver status ...
	I0722 10:53:56.857451   29292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:53:56.870462   29292 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup
	W0722 10:53:56.879621   29292 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:53:56.879676   29292 ssh_runner.go:195] Run: ls
	I0722 10:53:56.883712   29292 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:53:56.889421   29292 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:53:56.889440   29292 status.go:422] ha-461283 apiserver status = Running (err=<nil>)
	I0722 10:53:56.889449   29292 status.go:257] ha-461283 status: &{Name:ha-461283 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:53:56.889470   29292 status.go:255] checking status of ha-461283-m02 ...
	I0722 10:53:56.889766   29292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:56.889803   29292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:56.903948   29292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35973
	I0722 10:53:56.904263   29292 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:56.904709   29292 main.go:141] libmachine: Using API Version  1
	I0722 10:53:56.904725   29292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:56.905041   29292 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:56.905212   29292 main.go:141] libmachine: (ha-461283-m02) Calling .GetState
	I0722 10:53:56.906508   29292 status.go:330] ha-461283-m02 host status = "Running" (err=<nil>)
	I0722 10:53:56.906523   29292 host.go:66] Checking if "ha-461283-m02" exists ...
	I0722 10:53:56.906791   29292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:56.906824   29292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:56.921356   29292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44735
	I0722 10:53:56.921745   29292 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:56.922123   29292 main.go:141] libmachine: Using API Version  1
	I0722 10:53:56.922157   29292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:56.922442   29292 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:56.922597   29292 main.go:141] libmachine: (ha-461283-m02) Calling .GetIP
	I0722 10:53:56.925230   29292 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:56.925628   29292 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:53:56.925654   29292 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:56.925794   29292 host.go:66] Checking if "ha-461283-m02" exists ...
	I0722 10:53:56.926166   29292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:53:56.926219   29292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:53:56.940167   29292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43233
	I0722 10:53:56.940579   29292 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:53:56.941015   29292 main.go:141] libmachine: Using API Version  1
	I0722 10:53:56.941033   29292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:53:56.941304   29292 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:53:56.941449   29292 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:53:56.941599   29292 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:53:56.941621   29292 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:53:56.944033   29292 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:56.944446   29292 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:53:56.944472   29292 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:53:56.944598   29292 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:53:56.944755   29292 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:53:56.944901   29292 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:53:56.945034   29292 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	W0722 10:53:57.972743   29292 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:53:57.972787   29292 retry.go:31] will retry after 358.314542ms: dial tcp 192.168.39.207:22: connect: no route to host
	W0722 10:54:01.044655   29292 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.207:22: connect: no route to host
	W0722 10:54:01.044746   29292 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	E0722 10:54:01.044762   29292 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:54:01.044788   29292 status.go:257] ha-461283-m02 status: &{Name:ha-461283-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0722 10:54:01.044810   29292 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:54:01.044817   29292 status.go:255] checking status of ha-461283-m03 ...
	I0722 10:54:01.045111   29292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:01.045151   29292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:01.059610   29292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40107
	I0722 10:54:01.060038   29292 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:01.060515   29292 main.go:141] libmachine: Using API Version  1
	I0722 10:54:01.060538   29292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:01.060871   29292 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:01.061042   29292 main.go:141] libmachine: (ha-461283-m03) Calling .GetState
	I0722 10:54:01.062294   29292 status.go:330] ha-461283-m03 host status = "Running" (err=<nil>)
	I0722 10:54:01.062308   29292 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:54:01.062628   29292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:01.062679   29292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:01.079459   29292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37591
	I0722 10:54:01.079844   29292 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:01.080292   29292 main.go:141] libmachine: Using API Version  1
	I0722 10:54:01.080318   29292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:01.080628   29292 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:01.080795   29292 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:54:01.083578   29292 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:01.084003   29292 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:54:01.084046   29292 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:01.084213   29292 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:54:01.084644   29292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:01.084686   29292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:01.098523   29292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40495
	I0722 10:54:01.098909   29292 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:01.099331   29292 main.go:141] libmachine: Using API Version  1
	I0722 10:54:01.099348   29292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:01.099694   29292 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:01.099889   29292 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:54:01.100083   29292 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:01.100102   29292 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:54:01.102623   29292 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:01.102953   29292 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:54:01.102988   29292 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:01.103112   29292 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:54:01.103239   29292 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:54:01.103385   29292 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:54:01.103507   29292 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:54:01.191500   29292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:01.205919   29292 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:54:01.205942   29292 api_server.go:166] Checking apiserver status ...
	I0722 10:54:01.205970   29292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:54:01.219647   29292 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup
	W0722 10:54:01.228623   29292 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:54:01.228687   29292 ssh_runner.go:195] Run: ls
	I0722 10:54:01.232834   29292 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:54:01.237175   29292 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:54:01.237195   29292 status.go:422] ha-461283-m03 apiserver status = Running (err=<nil>)
	I0722 10:54:01.237203   29292 status.go:257] ha-461283-m03 status: &{Name:ha-461283-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:54:01.237230   29292 status.go:255] checking status of ha-461283-m04 ...
	I0722 10:54:01.237584   29292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:01.237632   29292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:01.251753   29292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43003
	I0722 10:54:01.252164   29292 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:01.252605   29292 main.go:141] libmachine: Using API Version  1
	I0722 10:54:01.252626   29292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:01.252927   29292 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:01.253132   29292 main.go:141] libmachine: (ha-461283-m04) Calling .GetState
	I0722 10:54:01.254483   29292 status.go:330] ha-461283-m04 host status = "Running" (err=<nil>)
	I0722 10:54:01.254503   29292 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:54:01.254763   29292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:01.254821   29292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:01.268843   29292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I0722 10:54:01.269184   29292 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:01.269614   29292 main.go:141] libmachine: Using API Version  1
	I0722 10:54:01.269631   29292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:01.269955   29292 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:01.270127   29292 main.go:141] libmachine: (ha-461283-m04) Calling .GetIP
	I0722 10:54:01.272807   29292 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:01.273181   29292 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:54:01.273207   29292 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:01.273343   29292 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:54:01.273600   29292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:01.273638   29292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:01.288122   29292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41009
	I0722 10:54:01.288529   29292 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:01.288940   29292 main.go:141] libmachine: Using API Version  1
	I0722 10:54:01.288962   29292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:01.289230   29292 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:01.289382   29292 main.go:141] libmachine: (ha-461283-m04) Calling .DriverName
	I0722 10:54:01.289556   29292 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:01.289574   29292 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHHostname
	I0722 10:54:01.292062   29292 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:01.292471   29292 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:54:01.292495   29292 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:01.292626   29292 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHPort
	I0722 10:54:01.292787   29292 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHKeyPath
	I0722 10:54:01.292931   29292 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHUsername
	I0722 10:54:01.293052   29292 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m04/id_rsa Username:docker}
	I0722 10:54:01.372307   29292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:01.387479   29292 status.go:257] ha-461283-m04 status: &{Name:ha-461283-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
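(Illustrative only.) For ha-461283-m02 the probe never gets an SSH session: the dial to 192.168.39.207:22 fails with "no route to host", is retried after a short backoff (358ms and 213ms in the runs above), and the node is then reported Host:Error with kubelet and apiserver Nonexistent. Below is a minimal sketch of that dial-and-retry loop; the attempt count and backoff are made up for illustration and are not minikube's real retry policy.

	// dial_retry_sketch.go - illustrative dial-and-retry loop matching the
	// "dial failure (will retry)" lines in the log; not minikube source.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func dialWithRetry(addr string, attempts int, backoff time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			var conn net.Conn
			conn, err = net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			// "connect: no route to host" surfaces here; wait, then try again.
			time.Sleep(backoff)
		}
		return fmt.Errorf("giving up on %s: %w", addr, err)
	}

	func main() {
		if err := dialWithRetry("192.168.39.207:22", 3, 350*time.Millisecond); err != nil {
			// Corresponds to the report's Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent.
			fmt.Println(err)
		}
	}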
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr: exit status 3 (4.558091746s)

                                                
                                                
-- stdout --
	ha-461283
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-461283-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 10:54:03.246425   29409 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:54:03.246521   29409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:54:03.246531   29409 out.go:304] Setting ErrFile to fd 2...
	I0722 10:54:03.246536   29409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:54:03.246727   29409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:54:03.246912   29409 out.go:298] Setting JSON to false
	I0722 10:54:03.246941   29409 mustload.go:65] Loading cluster: ha-461283
	I0722 10:54:03.246983   29409 notify.go:220] Checking for updates...
	I0722 10:54:03.247397   29409 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:54:03.247415   29409 status.go:255] checking status of ha-461283 ...
	I0722 10:54:03.247830   29409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:03.247872   29409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:03.267803   29409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43953
	I0722 10:54:03.268270   29409 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:03.268816   29409 main.go:141] libmachine: Using API Version  1
	I0722 10:54:03.268839   29409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:03.269312   29409 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:03.269544   29409 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:54:03.271335   29409 status.go:330] ha-461283 host status = "Running" (err=<nil>)
	I0722 10:54:03.271349   29409 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:54:03.271637   29409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:03.271674   29409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:03.287327   29409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I0722 10:54:03.287670   29409 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:03.288094   29409 main.go:141] libmachine: Using API Version  1
	I0722 10:54:03.288115   29409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:03.288447   29409 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:03.288635   29409 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:54:03.291256   29409 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:03.291615   29409 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:54:03.291650   29409 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:03.291728   29409 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:54:03.292001   29409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:03.292030   29409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:03.306160   29409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
	I0722 10:54:03.306586   29409 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:03.307003   29409 main.go:141] libmachine: Using API Version  1
	I0722 10:54:03.307019   29409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:03.307272   29409 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:03.307417   29409 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:54:03.307606   29409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:03.307627   29409 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:54:03.310095   29409 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:03.310430   29409 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:54:03.310460   29409 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:03.310552   29409 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:54:03.310699   29409 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:54:03.310847   29409 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:54:03.311009   29409 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:54:03.387836   29409 ssh_runner.go:195] Run: systemctl --version
	I0722 10:54:03.393919   29409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:03.408365   29409 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:54:03.408406   29409 api_server.go:166] Checking apiserver status ...
	I0722 10:54:03.408437   29409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:54:03.420740   29409 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup
	W0722 10:54:03.429380   29409 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:54:03.429432   29409 ssh_runner.go:195] Run: ls
	I0722 10:54:03.433436   29409 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:54:03.437646   29409 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:54:03.437671   29409 status.go:422] ha-461283 apiserver status = Running (err=<nil>)
	I0722 10:54:03.437684   29409 status.go:257] ha-461283 status: &{Name:ha-461283 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:54:03.437704   29409 status.go:255] checking status of ha-461283-m02 ...
	I0722 10:54:03.438048   29409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:03.438080   29409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:03.452797   29409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0722 10:54:03.453223   29409 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:03.453635   29409 main.go:141] libmachine: Using API Version  1
	I0722 10:54:03.453654   29409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:03.453922   29409 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:03.454067   29409 main.go:141] libmachine: (ha-461283-m02) Calling .GetState
	I0722 10:54:03.455515   29409 status.go:330] ha-461283-m02 host status = "Running" (err=<nil>)
	I0722 10:54:03.455530   29409 host.go:66] Checking if "ha-461283-m02" exists ...
	I0722 10:54:03.455787   29409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:03.455815   29409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:03.470849   29409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39715
	I0722 10:54:03.471206   29409 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:03.471629   29409 main.go:141] libmachine: Using API Version  1
	I0722 10:54:03.471656   29409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:03.471961   29409 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:03.472159   29409 main.go:141] libmachine: (ha-461283-m02) Calling .GetIP
	I0722 10:54:03.474793   29409 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:54:03.475246   29409 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:54:03.475275   29409 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:54:03.475405   29409 host.go:66] Checking if "ha-461283-m02" exists ...
	I0722 10:54:03.475670   29409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:03.475699   29409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:03.490642   29409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35349
	I0722 10:54:03.491070   29409 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:03.491551   29409 main.go:141] libmachine: Using API Version  1
	I0722 10:54:03.491568   29409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:03.491931   29409 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:03.492112   29409 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:54:03.492288   29409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:03.492308   29409 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:54:03.495046   29409 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:54:03.495469   29409 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:54:03.495500   29409 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:54:03.495650   29409 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:54:03.495846   29409 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:54:03.495974   29409 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:54:03.496125   29409 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	W0722 10:54:04.120624   29409 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:54:04.120689   29409 retry.go:31] will retry after 213.255035ms: dial tcp 192.168.39.207:22: connect: no route to host
	W0722 10:54:07.416630   29409 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.207:22: connect: no route to host
	W0722 10:54:07.416718   29409 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	E0722 10:54:07.416742   29409 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:54:07.416751   29409 status.go:257] ha-461283-m02 status: &{Name:ha-461283-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0722 10:54:07.416785   29409 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:54:07.416798   29409 status.go:255] checking status of ha-461283-m03 ...
	I0722 10:54:07.417192   29409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:07.417246   29409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:07.431807   29409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36775
	I0722 10:54:07.432187   29409 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:07.432650   29409 main.go:141] libmachine: Using API Version  1
	I0722 10:54:07.432666   29409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:07.433047   29409 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:07.433255   29409 main.go:141] libmachine: (ha-461283-m03) Calling .GetState
	I0722 10:54:07.434781   29409 status.go:330] ha-461283-m03 host status = "Running" (err=<nil>)
	I0722 10:54:07.434806   29409 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:54:07.435098   29409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:07.435130   29409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:07.451676   29409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39043
	I0722 10:54:07.452052   29409 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:07.452472   29409 main.go:141] libmachine: Using API Version  1
	I0722 10:54:07.452497   29409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:07.452845   29409 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:07.453032   29409 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:54:07.455499   29409 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:07.455970   29409 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:54:07.456005   29409 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:07.456126   29409 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:54:07.456457   29409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:07.456493   29409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:07.470582   29409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46759
	I0722 10:54:07.470943   29409 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:07.471417   29409 main.go:141] libmachine: Using API Version  1
	I0722 10:54:07.471431   29409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:07.471774   29409 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:07.471962   29409 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:54:07.472158   29409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:07.472173   29409 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:54:07.474832   29409 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:07.475283   29409 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:54:07.475313   29409 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:07.475472   29409 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:54:07.475644   29409 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:54:07.475797   29409 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:54:07.475965   29409 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:54:07.559811   29409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:07.575239   29409 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:54:07.575265   29409 api_server.go:166] Checking apiserver status ...
	I0722 10:54:07.575298   29409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:54:07.589910   29409 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup
	W0722 10:54:07.600324   29409 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:54:07.600390   29409 ssh_runner.go:195] Run: ls
	I0722 10:54:07.604746   29409 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:54:07.608847   29409 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:54:07.608865   29409 status.go:422] ha-461283-m03 apiserver status = Running (err=<nil>)
	I0722 10:54:07.608872   29409 status.go:257] ha-461283-m03 status: &{Name:ha-461283-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:54:07.608893   29409 status.go:255] checking status of ha-461283-m04 ...
	I0722 10:54:07.609155   29409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:07.609184   29409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:07.623673   29409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44173
	I0722 10:54:07.624078   29409 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:07.624550   29409 main.go:141] libmachine: Using API Version  1
	I0722 10:54:07.624570   29409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:07.624851   29409 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:07.625034   29409 main.go:141] libmachine: (ha-461283-m04) Calling .GetState
	I0722 10:54:07.626430   29409 status.go:330] ha-461283-m04 host status = "Running" (err=<nil>)
	I0722 10:54:07.626444   29409 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:54:07.626719   29409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:07.626755   29409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:07.642174   29409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45059
	I0722 10:54:07.642540   29409 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:07.642963   29409 main.go:141] libmachine: Using API Version  1
	I0722 10:54:07.642976   29409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:07.643214   29409 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:07.643396   29409 main.go:141] libmachine: (ha-461283-m04) Calling .GetIP
	I0722 10:54:07.646351   29409 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:07.646850   29409 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:54:07.646894   29409 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:07.647074   29409 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:54:07.647454   29409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:07.647493   29409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:07.662120   29409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45141
	I0722 10:54:07.662583   29409 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:07.663186   29409 main.go:141] libmachine: Using API Version  1
	I0722 10:54:07.663205   29409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:07.663557   29409 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:07.663728   29409 main.go:141] libmachine: (ha-461283-m04) Calling .DriverName
	I0722 10:54:07.663891   29409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:07.663908   29409 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHHostname
	I0722 10:54:07.666516   29409 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:07.666879   29409 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:54:07.666913   29409 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:07.667026   29409 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHPort
	I0722 10:54:07.667202   29409 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHKeyPath
	I0722 10:54:07.667309   29409 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHUsername
	I0722 10:54:07.667424   29409 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m04/id_rsa Username:docker}
	I0722 10:54:07.751566   29409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:07.766283   29409 status.go:257] ha-461283-m04 status: &{Name:ha-461283-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr: exit status 3 (3.722397124s)

                                                
                                                
-- stdout --
	ha-461283
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-461283-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 10:54:11.215806   29511 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:54:11.216071   29511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:54:11.216080   29511 out.go:304] Setting ErrFile to fd 2...
	I0722 10:54:11.216084   29511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:54:11.216329   29511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:54:11.216532   29511 out.go:298] Setting JSON to false
	I0722 10:54:11.216559   29511 mustload.go:65] Loading cluster: ha-461283
	I0722 10:54:11.216610   29511 notify.go:220] Checking for updates...
	I0722 10:54:11.216982   29511 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:54:11.216997   29511 status.go:255] checking status of ha-461283 ...
	I0722 10:54:11.217397   29511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:11.217428   29511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:11.237327   29511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39903
	I0722 10:54:11.237725   29511 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:11.238331   29511 main.go:141] libmachine: Using API Version  1
	I0722 10:54:11.238345   29511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:11.238707   29511 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:11.238930   29511 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:54:11.240511   29511 status.go:330] ha-461283 host status = "Running" (err=<nil>)
	I0722 10:54:11.240524   29511 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:54:11.240859   29511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:11.240917   29511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:11.255093   29511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43269
	I0722 10:54:11.255467   29511 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:11.255875   29511 main.go:141] libmachine: Using API Version  1
	I0722 10:54:11.255910   29511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:11.256188   29511 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:11.256340   29511 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:54:11.258635   29511 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:11.259011   29511 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:54:11.259038   29511 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:11.259172   29511 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:54:11.259588   29511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:11.259626   29511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:11.274043   29511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34445
	I0722 10:54:11.274393   29511 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:11.274795   29511 main.go:141] libmachine: Using API Version  1
	I0722 10:54:11.274809   29511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:11.275115   29511 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:11.275279   29511 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:54:11.275469   29511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:11.275486   29511 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:54:11.279530   29511 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:11.279903   29511 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:54:11.279922   29511 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:11.280089   29511 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:54:11.280242   29511 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:54:11.280427   29511 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:54:11.280587   29511 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:54:11.361276   29511 ssh_runner.go:195] Run: systemctl --version
	I0722 10:54:11.367162   29511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:11.381458   29511 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:54:11.381482   29511 api_server.go:166] Checking apiserver status ...
	I0722 10:54:11.381509   29511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:54:11.395592   29511 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup
	W0722 10:54:11.405594   29511 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:54:11.405636   29511 ssh_runner.go:195] Run: ls
	I0722 10:54:11.410090   29511 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:54:11.414637   29511 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:54:11.414661   29511 status.go:422] ha-461283 apiserver status = Running (err=<nil>)
	I0722 10:54:11.414677   29511 status.go:257] ha-461283 status: &{Name:ha-461283 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:54:11.414697   29511 status.go:255] checking status of ha-461283-m02 ...
	I0722 10:54:11.415117   29511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:11.415162   29511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:11.433710   29511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35879
	I0722 10:54:11.434115   29511 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:11.434613   29511 main.go:141] libmachine: Using API Version  1
	I0722 10:54:11.434637   29511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:11.434947   29511 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:11.435134   29511 main.go:141] libmachine: (ha-461283-m02) Calling .GetState
	I0722 10:54:11.436740   29511 status.go:330] ha-461283-m02 host status = "Running" (err=<nil>)
	I0722 10:54:11.436757   29511 host.go:66] Checking if "ha-461283-m02" exists ...
	I0722 10:54:11.437038   29511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:11.437067   29511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:11.453291   29511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0722 10:54:11.453638   29511 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:11.454148   29511 main.go:141] libmachine: Using API Version  1
	I0722 10:54:11.454171   29511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:11.454439   29511 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:11.454627   29511 main.go:141] libmachine: (ha-461283-m02) Calling .GetIP
	I0722 10:54:11.457363   29511 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:54:11.457805   29511 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:54:11.457837   29511 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:54:11.458036   29511 host.go:66] Checking if "ha-461283-m02" exists ...
	I0722 10:54:11.458336   29511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:11.458373   29511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:11.472562   29511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36319
	I0722 10:54:11.472949   29511 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:11.473368   29511 main.go:141] libmachine: Using API Version  1
	I0722 10:54:11.473389   29511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:11.473676   29511 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:11.473836   29511 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:54:11.474017   29511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:11.474037   29511 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:54:11.476544   29511 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:54:11.476953   29511 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:54:11.476967   29511 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:54:11.477118   29511 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:54:11.477281   29511 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:54:11.477431   29511 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:54:11.477559   29511 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	W0722 10:54:14.548619   29511 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.207:22: connect: no route to host
	W0722 10:54:14.548720   29511 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	E0722 10:54:14.548751   29511 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:54:14.548763   29511 status.go:257] ha-461283-m02 status: &{Name:ha-461283-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0722 10:54:14.548788   29511 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	I0722 10:54:14.548803   29511 status.go:255] checking status of ha-461283-m03 ...
	I0722 10:54:14.549109   29511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:14.549157   29511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:14.563752   29511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42339
	I0722 10:54:14.564148   29511 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:14.564685   29511 main.go:141] libmachine: Using API Version  1
	I0722 10:54:14.564710   29511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:14.565050   29511 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:14.565251   29511 main.go:141] libmachine: (ha-461283-m03) Calling .GetState
	I0722 10:54:14.566761   29511 status.go:330] ha-461283-m03 host status = "Running" (err=<nil>)
	I0722 10:54:14.566787   29511 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:54:14.567088   29511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:14.567124   29511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:14.581556   29511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37941
	I0722 10:54:14.581962   29511 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:14.582409   29511 main.go:141] libmachine: Using API Version  1
	I0722 10:54:14.582441   29511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:14.582733   29511 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:14.582902   29511 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:54:14.586256   29511 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:14.586655   29511 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:54:14.586686   29511 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:14.586837   29511 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:54:14.587128   29511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:14.587174   29511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:14.601902   29511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0722 10:54:14.602306   29511 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:14.602727   29511 main.go:141] libmachine: Using API Version  1
	I0722 10:54:14.602749   29511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:14.603040   29511 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:14.603204   29511 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:54:14.603361   29511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:14.603376   29511 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:54:14.606105   29511 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:14.606472   29511 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:54:14.606504   29511 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:14.606691   29511 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:54:14.606842   29511 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:54:14.606991   29511 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:54:14.607141   29511 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:54:14.693270   29511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:14.710082   29511 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:54:14.710136   29511 api_server.go:166] Checking apiserver status ...
	I0722 10:54:14.710175   29511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:54:14.723004   29511 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup
	W0722 10:54:14.731691   29511 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:54:14.731729   29511 ssh_runner.go:195] Run: ls
	I0722 10:54:14.735616   29511 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:54:14.742702   29511 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:54:14.742725   29511 status.go:422] ha-461283-m03 apiserver status = Running (err=<nil>)
	I0722 10:54:14.742734   29511 status.go:257] ha-461283-m03 status: &{Name:ha-461283-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:54:14.742753   29511 status.go:255] checking status of ha-461283-m04 ...
	I0722 10:54:14.743060   29511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:14.743094   29511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:14.758966   29511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38675
	I0722 10:54:14.759302   29511 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:14.759739   29511 main.go:141] libmachine: Using API Version  1
	I0722 10:54:14.759770   29511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:14.760066   29511 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:14.760268   29511 main.go:141] libmachine: (ha-461283-m04) Calling .GetState
	I0722 10:54:14.761769   29511 status.go:330] ha-461283-m04 host status = "Running" (err=<nil>)
	I0722 10:54:14.761785   29511 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:54:14.762050   29511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:14.762080   29511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:14.775802   29511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45445
	I0722 10:54:14.776181   29511 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:14.776662   29511 main.go:141] libmachine: Using API Version  1
	I0722 10:54:14.776685   29511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:14.777091   29511 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:14.777266   29511 main.go:141] libmachine: (ha-461283-m04) Calling .GetIP
	I0722 10:54:14.780208   29511 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:14.780645   29511 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:54:14.780669   29511 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:14.780821   29511 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:54:14.781213   29511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:14.781256   29511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:14.794656   29511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44881
	I0722 10:54:14.795023   29511 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:14.795450   29511 main.go:141] libmachine: Using API Version  1
	I0722 10:54:14.795471   29511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:14.795731   29511 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:14.795836   29511 main.go:141] libmachine: (ha-461283-m04) Calling .DriverName
	I0722 10:54:14.796050   29511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:14.796067   29511 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHHostname
	I0722 10:54:14.798515   29511 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:14.798915   29511 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:54:14.798936   29511 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:14.799060   29511 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHPort
	I0722 10:54:14.799228   29511 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHKeyPath
	I0722 10:54:14.799381   29511 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHUsername
	I0722 10:54:14.799535   29511 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m04/id_rsa Username:docker}
	I0722 10:54:14.884269   29511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:14.898091   29511 status.go:257] ha-461283-m04 status: &{Name:ha-461283-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
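The `dial tcp 192.168.39.207:22: connect: no route to host` errors in the stderr block above are what demote ha-461283-m02 from `Running` to `Error` (and, in the later runs, `Stopped`) in the status output. As a rough, standalone way to reproduce that reachability probe outside of minikube, the Go sketch below simply dials the node's SSH port with a timeout; the address comes from the log, the 3-second timeout is an assumption, and the code is illustrative rather than minikube's actual status logic.

// reachable.go - illustrative only, not part of minikube.
// Mirrors the kind of TCP reachability check that makes the status
// command report ha-461283-m02 as unreachable in the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address and port are taken from the log (ha-461283-m02, SSH on 22);
	// the 3s timeout is an assumption for this example.
	addr := "192.168.39.207:22"
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		// "connect: no route to host" lands here when the VM is down.
		fmt.Printf("%s unreachable: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("%s reachable\n", addr)
}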
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr: exit status 7 (614.414718ms)

                                                
                                                
-- stdout --
	ha-461283
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-461283-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 10:54:19.283806   29648 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:54:19.283916   29648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:54:19.283927   29648 out.go:304] Setting ErrFile to fd 2...
	I0722 10:54:19.283932   29648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:54:19.284116   29648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:54:19.284288   29648 out.go:298] Setting JSON to false
	I0722 10:54:19.284320   29648 mustload.go:65] Loading cluster: ha-461283
	I0722 10:54:19.284354   29648 notify.go:220] Checking for updates...
	I0722 10:54:19.284866   29648 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:54:19.284896   29648 status.go:255] checking status of ha-461283 ...
	I0722 10:54:19.285352   29648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:19.285393   29648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:19.305998   29648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0722 10:54:19.306317   29648 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:19.306959   29648 main.go:141] libmachine: Using API Version  1
	I0722 10:54:19.306990   29648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:19.307287   29648 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:19.307480   29648 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:54:19.309339   29648 status.go:330] ha-461283 host status = "Running" (err=<nil>)
	I0722 10:54:19.309351   29648 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:54:19.309647   29648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:19.309682   29648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:19.324354   29648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46327
	I0722 10:54:19.324787   29648 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:19.325283   29648 main.go:141] libmachine: Using API Version  1
	I0722 10:54:19.325302   29648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:19.325595   29648 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:19.325782   29648 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:54:19.328601   29648 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:19.329041   29648 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:54:19.329062   29648 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:19.329198   29648 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:54:19.329504   29648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:19.329542   29648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:19.343667   29648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44511
	I0722 10:54:19.344045   29648 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:19.344496   29648 main.go:141] libmachine: Using API Version  1
	I0722 10:54:19.344519   29648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:19.344819   29648 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:19.345010   29648 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:54:19.345234   29648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:19.345255   29648 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:54:19.347704   29648 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:19.348103   29648 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:54:19.348128   29648 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:19.348256   29648 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:54:19.348434   29648 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:54:19.348566   29648 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:54:19.348703   29648 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:54:19.429993   29648 ssh_runner.go:195] Run: systemctl --version
	I0722 10:54:19.437482   29648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:19.455971   29648 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:54:19.455995   29648 api_server.go:166] Checking apiserver status ...
	I0722 10:54:19.456030   29648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:54:19.477887   29648 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup
	W0722 10:54:19.488303   29648 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:54:19.488355   29648 ssh_runner.go:195] Run: ls
	I0722 10:54:19.493332   29648 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:54:19.497486   29648 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:54:19.497508   29648 status.go:422] ha-461283 apiserver status = Running (err=<nil>)
	I0722 10:54:19.497521   29648 status.go:257] ha-461283 status: &{Name:ha-461283 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:54:19.497538   29648 status.go:255] checking status of ha-461283-m02 ...
	I0722 10:54:19.497923   29648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:19.497964   29648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:19.512362   29648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34615
	I0722 10:54:19.512799   29648 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:19.513238   29648 main.go:141] libmachine: Using API Version  1
	I0722 10:54:19.513257   29648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:19.513587   29648 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:19.513745   29648 main.go:141] libmachine: (ha-461283-m02) Calling .GetState
	I0722 10:54:19.515323   29648 status.go:330] ha-461283-m02 host status = "Stopped" (err=<nil>)
	I0722 10:54:19.515340   29648 status.go:343] host is not running, skipping remaining checks
	I0722 10:54:19.515348   29648 status.go:257] ha-461283-m02 status: &{Name:ha-461283-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:54:19.515370   29648 status.go:255] checking status of ha-461283-m03 ...
	I0722 10:54:19.515654   29648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:19.515687   29648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:19.529712   29648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36507
	I0722 10:54:19.530111   29648 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:19.530561   29648 main.go:141] libmachine: Using API Version  1
	I0722 10:54:19.530584   29648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:19.530889   29648 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:19.531073   29648 main.go:141] libmachine: (ha-461283-m03) Calling .GetState
	I0722 10:54:19.532483   29648 status.go:330] ha-461283-m03 host status = "Running" (err=<nil>)
	I0722 10:54:19.532499   29648 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:54:19.532843   29648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:19.532879   29648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:19.547743   29648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36431
	I0722 10:54:19.548292   29648 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:19.548865   29648 main.go:141] libmachine: Using API Version  1
	I0722 10:54:19.548901   29648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:19.549267   29648 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:19.549458   29648 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:54:19.552117   29648 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:19.552609   29648 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:54:19.552655   29648 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:19.552828   29648 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:54:19.553230   29648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:19.553273   29648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:19.567282   29648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36477
	I0722 10:54:19.567611   29648 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:19.568026   29648 main.go:141] libmachine: Using API Version  1
	I0722 10:54:19.568046   29648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:19.568413   29648 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:19.568583   29648 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:54:19.568823   29648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:19.568890   29648 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:54:19.571315   29648 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:19.571711   29648 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:54:19.571737   29648 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:19.571877   29648 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:54:19.572052   29648 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:54:19.572201   29648 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:54:19.572347   29648 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:54:19.655783   29648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:19.669886   29648 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:54:19.669911   29648 api_server.go:166] Checking apiserver status ...
	I0722 10:54:19.669941   29648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:54:19.682752   29648 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup
	W0722 10:54:19.691119   29648 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:54:19.691158   29648 ssh_runner.go:195] Run: ls
	I0722 10:54:19.695260   29648 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:54:19.699799   29648 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:54:19.699822   29648 status.go:422] ha-461283-m03 apiserver status = Running (err=<nil>)
	I0722 10:54:19.699830   29648 status.go:257] ha-461283-m03 status: &{Name:ha-461283-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:54:19.699842   29648 status.go:255] checking status of ha-461283-m04 ...
	I0722 10:54:19.700153   29648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:19.700185   29648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:19.714533   29648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37529
	I0722 10:54:19.714938   29648 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:19.715361   29648 main.go:141] libmachine: Using API Version  1
	I0722 10:54:19.715379   29648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:19.715658   29648 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:19.715840   29648 main.go:141] libmachine: (ha-461283-m04) Calling .GetState
	I0722 10:54:19.717489   29648 status.go:330] ha-461283-m04 host status = "Running" (err=<nil>)
	I0722 10:54:19.717504   29648 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:54:19.717895   29648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:19.717933   29648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:19.732483   29648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38355
	I0722 10:54:19.732872   29648 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:19.733284   29648 main.go:141] libmachine: Using API Version  1
	I0722 10:54:19.733305   29648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:19.733575   29648 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:19.733754   29648 main.go:141] libmachine: (ha-461283-m04) Calling .GetIP
	I0722 10:54:19.736295   29648 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:19.736776   29648 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:54:19.736798   29648 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:19.736957   29648 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:54:19.737299   29648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:19.737339   29648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:19.753432   29648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I0722 10:54:19.753837   29648 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:19.754289   29648 main.go:141] libmachine: Using API Version  1
	I0722 10:54:19.754308   29648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:19.754600   29648 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:19.754753   29648 main.go:141] libmachine: (ha-461283-m04) Calling .DriverName
	I0722 10:54:19.754950   29648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:19.754971   29648 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHHostname
	I0722 10:54:19.757319   29648 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:19.757684   29648 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:54:19.757713   29648 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:19.757818   29648 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHPort
	I0722 10:54:19.757987   29648 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHKeyPath
	I0722 10:54:19.758112   29648 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHUsername
	I0722 10:54:19.758239   29648 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m04/id_rsa Username:docker}
	I0722 10:54:19.840376   29648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:19.855757   29648 status.go:257] ha-461283-m04 status: &{Name:ha-461283-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
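Each control-plane check in these runs ends with a GET against https://192.168.39.254:8443/healthz, and a 200 response is what yields `apiserver: Running` in the status output. The sketch below pokes that endpoint in the same spirit; the URL is taken from the log, while the client with TLS verification disabled is an assumption made only to keep the example self-contained (minikube itself authenticates with the cluster's certificates).

// healthz.go - illustrative sketch, not minikube's api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// URL comes from the log; InsecureSkipVerify is an assumption made
	// only so the example runs without the cluster's CA material.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode) // 200 => apiserver healthy
}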
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr: exit status 7 (606.004025ms)

                                                
                                                
-- stdout --
	ha-461283
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-461283-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 10:54:28.141110   29752 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:54:28.141207   29752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:54:28.141215   29752 out.go:304] Setting ErrFile to fd 2...
	I0722 10:54:28.141219   29752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:54:28.141385   29752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:54:28.141525   29752 out.go:298] Setting JSON to false
	I0722 10:54:28.141550   29752 mustload.go:65] Loading cluster: ha-461283
	I0722 10:54:28.141643   29752 notify.go:220] Checking for updates...
	I0722 10:54:28.141888   29752 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:54:28.141900   29752 status.go:255] checking status of ha-461283 ...
	I0722 10:54:28.142340   29752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:28.142394   29752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:28.161860   29752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43809
	I0722 10:54:28.162339   29752 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:28.162951   29752 main.go:141] libmachine: Using API Version  1
	I0722 10:54:28.162981   29752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:28.163359   29752 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:28.163548   29752 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:54:28.165235   29752 status.go:330] ha-461283 host status = "Running" (err=<nil>)
	I0722 10:54:28.165260   29752 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:54:28.165529   29752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:28.165565   29752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:28.180365   29752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34033
	I0722 10:54:28.180724   29752 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:28.181214   29752 main.go:141] libmachine: Using API Version  1
	I0722 10:54:28.181228   29752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:28.181488   29752 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:28.181663   29752 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:54:28.184356   29752 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:28.184811   29752 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:54:28.184843   29752 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:28.184944   29752 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:54:28.185259   29752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:28.185301   29752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:28.199670   29752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0722 10:54:28.200027   29752 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:28.200445   29752 main.go:141] libmachine: Using API Version  1
	I0722 10:54:28.200464   29752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:28.200814   29752 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:28.201008   29752 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:54:28.201208   29752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:28.201228   29752 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:54:28.203633   29752 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:28.204033   29752 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:54:28.204071   29752 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:28.204171   29752 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:54:28.204323   29752 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:54:28.204483   29752 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:54:28.204622   29752 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:54:28.285546   29752 ssh_runner.go:195] Run: systemctl --version
	I0722 10:54:28.292113   29752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:28.310500   29752 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:54:28.310527   29752 api_server.go:166] Checking apiserver status ...
	I0722 10:54:28.310565   29752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:54:28.326376   29752 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup
	W0722 10:54:28.338518   29752 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:54:28.338587   29752 ssh_runner.go:195] Run: ls
	I0722 10:54:28.344430   29752 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:54:28.350316   29752 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:54:28.350340   29752 status.go:422] ha-461283 apiserver status = Running (err=<nil>)
	I0722 10:54:28.350348   29752 status.go:257] ha-461283 status: &{Name:ha-461283 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:54:28.350364   29752 status.go:255] checking status of ha-461283-m02 ...
	I0722 10:54:28.350668   29752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:28.350708   29752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:28.365006   29752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45143
	I0722 10:54:28.365430   29752 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:28.365946   29752 main.go:141] libmachine: Using API Version  1
	I0722 10:54:28.365968   29752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:28.366214   29752 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:28.366385   29752 main.go:141] libmachine: (ha-461283-m02) Calling .GetState
	I0722 10:54:28.367805   29752 status.go:330] ha-461283-m02 host status = "Stopped" (err=<nil>)
	I0722 10:54:28.367815   29752 status.go:343] host is not running, skipping remaining checks
	I0722 10:54:28.367840   29752 status.go:257] ha-461283-m02 status: &{Name:ha-461283-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:54:28.367858   29752 status.go:255] checking status of ha-461283-m03 ...
	I0722 10:54:28.368144   29752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:28.368185   29752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:28.381677   29752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46635
	I0722 10:54:28.381998   29752 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:28.382468   29752 main.go:141] libmachine: Using API Version  1
	I0722 10:54:28.382487   29752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:28.382752   29752 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:28.382931   29752 main.go:141] libmachine: (ha-461283-m03) Calling .GetState
	I0722 10:54:28.384160   29752 status.go:330] ha-461283-m03 host status = "Running" (err=<nil>)
	I0722 10:54:28.384183   29752 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:54:28.384466   29752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:28.384495   29752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:28.398196   29752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38699
	I0722 10:54:28.398609   29752 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:28.399082   29752 main.go:141] libmachine: Using API Version  1
	I0722 10:54:28.399103   29752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:28.399377   29752 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:28.399582   29752 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:54:28.401904   29752 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:28.402261   29752 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:54:28.402289   29752 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:28.402406   29752 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:54:28.402667   29752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:28.402698   29752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:28.416063   29752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43735
	I0722 10:54:28.416412   29752 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:28.416764   29752 main.go:141] libmachine: Using API Version  1
	I0722 10:54:28.416781   29752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:28.417156   29752 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:28.417303   29752 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:54:28.417464   29752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:28.417494   29752 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:54:28.419785   29752 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:28.420139   29752 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:54:28.420161   29752 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:28.420312   29752 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:54:28.420491   29752 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:54:28.420649   29752 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:54:28.420784   29752 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:54:28.504194   29752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:28.520070   29752 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:54:28.520094   29752 api_server.go:166] Checking apiserver status ...
	I0722 10:54:28.520139   29752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:54:28.533819   29752 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup
	W0722 10:54:28.544396   29752 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:54:28.544433   29752 ssh_runner.go:195] Run: ls
	I0722 10:54:28.548846   29752 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:54:28.553271   29752 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:54:28.553290   29752 status.go:422] ha-461283-m03 apiserver status = Running (err=<nil>)
	I0722 10:54:28.553299   29752 status.go:257] ha-461283-m03 status: &{Name:ha-461283-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:54:28.553325   29752 status.go:255] checking status of ha-461283-m04 ...
	I0722 10:54:28.553608   29752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:28.553648   29752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:28.568104   29752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38077
	I0722 10:54:28.568590   29752 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:28.569102   29752 main.go:141] libmachine: Using API Version  1
	I0722 10:54:28.569123   29752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:28.569408   29752 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:28.569582   29752 main.go:141] libmachine: (ha-461283-m04) Calling .GetState
	I0722 10:54:28.571086   29752 status.go:330] ha-461283-m04 host status = "Running" (err=<nil>)
	I0722 10:54:28.571101   29752 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:54:28.571448   29752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:28.571484   29752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:28.586945   29752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43855
	I0722 10:54:28.587354   29752 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:28.587839   29752 main.go:141] libmachine: Using API Version  1
	I0722 10:54:28.587866   29752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:28.588157   29752 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:28.588311   29752 main.go:141] libmachine: (ha-461283-m04) Calling .GetIP
	I0722 10:54:28.590859   29752 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:28.591269   29752 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:54:28.591316   29752 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:28.591420   29752 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:54:28.591712   29752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:28.591749   29752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:28.605934   29752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35107
	I0722 10:54:28.606289   29752 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:28.606706   29752 main.go:141] libmachine: Using API Version  1
	I0722 10:54:28.606727   29752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:28.607058   29752 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:28.607238   29752 main.go:141] libmachine: (ha-461283-m04) Calling .DriverName
	I0722 10:54:28.607447   29752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:28.607468   29752 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHHostname
	I0722 10:54:28.610202   29752 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:28.610619   29752 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:54:28.610667   29752 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:28.610771   29752 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHPort
	I0722 10:54:28.610920   29752 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHKeyPath
	I0722 10:54:28.611064   29752 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHUsername
	I0722 10:54:28.611186   29752 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m04/id_rsa Username:docker}
	I0722 10:54:28.691648   29752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:28.706061   29752 status.go:257] ha-461283-m04 status: &{Name:ha-461283-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
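The repeated `unable to find freezer cgroup` warnings come from `sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup` exiting non-zero; one common reason is a cgroup v2 host, where `/proc/<pid>/cgroup` contains no named `freezer` controller line, and the status check then falls back to the `/healthz` probe as seen above. The hedged sketch below performs the same lookup against `/proc/self/cgroup`, standing in for the PID-specific path printed in the log.

// freezer.go - illustrative; shows why the egrep in the log can exit 1:
// on cgroup v2 there is no "N:freezer:" line in /proc/<pid>/cgroup.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// /proc/self/cgroup stands in for the /proc/1169/cgroup path from the log.
	f, err := os.Open("/proc/self/cgroup")
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	found := false
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// cgroup v1 lines look like "7:freezer:/"; v2 exposes a single "0::/..." line.
		if strings.Contains(scanner.Text(), ":freezer:") {
			found = true
			break
		}
	}
	fmt.Println("freezer controller present:", found)
}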
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr: exit status 7 (607.482361ms)

                                                
                                                
-- stdout --
	ha-461283
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-461283-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 10:54:39.322970   29858 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:54:39.323086   29858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:54:39.323095   29858 out.go:304] Setting ErrFile to fd 2...
	I0722 10:54:39.323101   29858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:54:39.323295   29858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:54:39.323469   29858 out.go:298] Setting JSON to false
	I0722 10:54:39.323499   29858 mustload.go:65] Loading cluster: ha-461283
	I0722 10:54:39.323603   29858 notify.go:220] Checking for updates...
	I0722 10:54:39.323889   29858 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:54:39.323904   29858 status.go:255] checking status of ha-461283 ...
	I0722 10:54:39.324251   29858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:39.324293   29858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:39.343958   29858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0722 10:54:39.344328   29858 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:39.344809   29858 main.go:141] libmachine: Using API Version  1
	I0722 10:54:39.344829   29858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:39.345142   29858 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:39.345306   29858 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:54:39.346800   29858 status.go:330] ha-461283 host status = "Running" (err=<nil>)
	I0722 10:54:39.346813   29858 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:54:39.347129   29858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:39.347176   29858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:39.361466   29858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45923
	I0722 10:54:39.361883   29858 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:39.362395   29858 main.go:141] libmachine: Using API Version  1
	I0722 10:54:39.362418   29858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:39.362759   29858 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:39.362965   29858 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:54:39.365948   29858 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:39.366400   29858 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:54:39.366430   29858 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:39.366578   29858 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:54:39.366843   29858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:39.366877   29858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:39.381229   29858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39593
	I0722 10:54:39.381594   29858 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:39.382026   29858 main.go:141] libmachine: Using API Version  1
	I0722 10:54:39.382048   29858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:39.382414   29858 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:39.382601   29858 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:54:39.382790   29858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:39.382813   29858 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:54:39.385130   29858 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:39.385510   29858 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:54:39.385526   29858 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:54:39.385692   29858 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:54:39.385868   29858 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:54:39.385988   29858 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:54:39.386099   29858 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:54:39.472049   29858 ssh_runner.go:195] Run: systemctl --version
	I0722 10:54:39.478888   29858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:39.494631   29858 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:54:39.494655   29858 api_server.go:166] Checking apiserver status ...
	I0722 10:54:39.494691   29858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:54:39.509061   29858 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup
	W0722 10:54:39.517702   29858 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:54:39.517754   29858 ssh_runner.go:195] Run: ls
	I0722 10:54:39.522002   29858 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:54:39.526152   29858 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:54:39.526172   29858 status.go:422] ha-461283 apiserver status = Running (err=<nil>)
	I0722 10:54:39.526184   29858 status.go:257] ha-461283 status: &{Name:ha-461283 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:54:39.526204   29858 status.go:255] checking status of ha-461283-m02 ...
	I0722 10:54:39.526536   29858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:39.526576   29858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:39.541720   29858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0722 10:54:39.542051   29858 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:39.542472   29858 main.go:141] libmachine: Using API Version  1
	I0722 10:54:39.542491   29858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:39.542771   29858 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:39.542934   29858 main.go:141] libmachine: (ha-461283-m02) Calling .GetState
	I0722 10:54:39.544273   29858 status.go:330] ha-461283-m02 host status = "Stopped" (err=<nil>)
	I0722 10:54:39.544284   29858 status.go:343] host is not running, skipping remaining checks
	I0722 10:54:39.544291   29858 status.go:257] ha-461283-m02 status: &{Name:ha-461283-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:54:39.544306   29858 status.go:255] checking status of ha-461283-m03 ...
	I0722 10:54:39.544618   29858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:39.544655   29858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:39.558058   29858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39089
	I0722 10:54:39.558369   29858 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:39.558753   29858 main.go:141] libmachine: Using API Version  1
	I0722 10:54:39.558770   29858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:39.559057   29858 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:39.559229   29858 main.go:141] libmachine: (ha-461283-m03) Calling .GetState
	I0722 10:54:39.560486   29858 status.go:330] ha-461283-m03 host status = "Running" (err=<nil>)
	I0722 10:54:39.560502   29858 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:54:39.560775   29858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:39.560812   29858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:39.575446   29858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I0722 10:54:39.575811   29858 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:39.576183   29858 main.go:141] libmachine: Using API Version  1
	I0722 10:54:39.576206   29858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:39.576520   29858 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:39.576681   29858 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:54:39.579198   29858 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:39.579573   29858 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:54:39.579592   29858 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:39.579741   29858 host.go:66] Checking if "ha-461283-m03" exists ...
	I0722 10:54:39.580031   29858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:39.580077   29858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:39.597064   29858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44321
	I0722 10:54:39.597472   29858 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:39.597986   29858 main.go:141] libmachine: Using API Version  1
	I0722 10:54:39.598013   29858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:39.598309   29858 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:39.598524   29858 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:54:39.598733   29858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:39.598755   29858 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:54:39.601852   29858 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:39.602397   29858 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:54:39.602422   29858 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:39.602574   29858 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:54:39.602734   29858 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:54:39.602906   29858 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:54:39.603089   29858 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:54:39.687763   29858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:39.704031   29858 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 10:54:39.704054   29858 api_server.go:166] Checking apiserver status ...
	I0722 10:54:39.704080   29858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:54:39.719008   29858 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup
	W0722 10:54:39.729334   29858 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 10:54:39.729374   29858 ssh_runner.go:195] Run: ls
	I0722 10:54:39.733852   29858 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 10:54:39.738318   29858 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 10:54:39.738340   29858 status.go:422] ha-461283-m03 apiserver status = Running (err=<nil>)
	I0722 10:54:39.738349   29858 status.go:257] ha-461283-m03 status: &{Name:ha-461283-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 10:54:39.738362   29858 status.go:255] checking status of ha-461283-m04 ...
	I0722 10:54:39.738676   29858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:39.738719   29858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:39.753315   29858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36461
	I0722 10:54:39.753710   29858 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:39.754223   29858 main.go:141] libmachine: Using API Version  1
	I0722 10:54:39.754246   29858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:39.754532   29858 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:39.754680   29858 main.go:141] libmachine: (ha-461283-m04) Calling .GetState
	I0722 10:54:39.756322   29858 status.go:330] ha-461283-m04 host status = "Running" (err=<nil>)
	I0722 10:54:39.756338   29858 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:54:39.756734   29858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:39.756774   29858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:39.771407   29858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
	I0722 10:54:39.771774   29858 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:39.772197   29858 main.go:141] libmachine: Using API Version  1
	I0722 10:54:39.772218   29858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:39.772539   29858 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:39.772749   29858 main.go:141] libmachine: (ha-461283-m04) Calling .GetIP
	I0722 10:54:39.775615   29858 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:39.776048   29858 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:54:39.776070   29858 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:39.776302   29858 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 10:54:39.776615   29858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:39.776658   29858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:39.791590   29858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35075
	I0722 10:54:39.791985   29858 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:39.792472   29858 main.go:141] libmachine: Using API Version  1
	I0722 10:54:39.792495   29858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:39.792778   29858 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:39.792956   29858 main.go:141] libmachine: (ha-461283-m04) Calling .DriverName
	I0722 10:54:39.793156   29858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 10:54:39.793177   29858 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHHostname
	I0722 10:54:39.795689   29858 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:39.796093   29858 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:54:39.796119   29858 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:39.796266   29858 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHPort
	I0722 10:54:39.796453   29858 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHKeyPath
	I0722 10:54:39.796610   29858 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHUsername
	I0722 10:54:39.796710   29858 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m04/id_rsa Username:docker}
	I0722 10:54:39.876256   29858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:54:39.891004   29858 status.go:257] ha-461283-m04 status: &{Name:ha-461283-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr" : exit status 7
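(The failure at ha_test.go:432 is triggered by the non-zero exit code of the `status` command above. A minimal stand-alone sketch of that check, not the actual test helper, and assuming the minikube binary and the ha-461283 profile exist locally, is:)

	// Hypothetical helper mirroring the check above: run `minikube status`
	// for the ha-461283 profile and report its exit code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-461283",
			"status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok {
			// A non-zero code (7 in the run above) means at least one node or
			// component, here ha-461283-m02, is reported as Stopped.
			fmt.Println("exit status:", exitErr.ExitCode())
		}
	}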
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-461283 -n ha-461283
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-461283 logs -n 25: (1.297945964s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m03:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283:/home/docker/cp-test_ha-461283-m03_ha-461283.txt                       |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283 sudo cat                                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m03_ha-461283.txt                                 |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m03:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m02:/home/docker/cp-test_ha-461283-m03_ha-461283-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283-m02 sudo cat                                          | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m03_ha-461283-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m03:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04:/home/docker/cp-test_ha-461283-m03_ha-461283-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283-m04 sudo cat                                          | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m03_ha-461283-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-461283 cp testdata/cp-test.txt                                                | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3161647133/001/cp-test_ha-461283-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283:/home/docker/cp-test_ha-461283-m04_ha-461283.txt                       |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283 sudo cat                                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m04_ha-461283.txt                                 |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m02:/home/docker/cp-test_ha-461283-m04_ha-461283-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283-m02 sudo cat                                          | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m04_ha-461283-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m03:/home/docker/cp-test_ha-461283-m04_ha-461283-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283-m03 sudo cat                                          | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m04_ha-461283-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-461283 node stop m02 -v=7                                                     | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-461283 node start m02 -v=7                                                    | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 10:46:38
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 10:46:38.194055   24174 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:46:38.194160   24174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:46:38.194171   24174 out.go:304] Setting ErrFile to fd 2...
	I0722 10:46:38.194176   24174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:46:38.194345   24174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:46:38.194890   24174 out.go:298] Setting JSON to false
	I0722 10:46:38.195769   24174 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1750,"bootTime":1721643448,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 10:46:38.195821   24174 start.go:139] virtualization: kvm guest
	I0722 10:46:38.197620   24174 out.go:177] * [ha-461283] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 10:46:38.198991   24174 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 10:46:38.198999   24174 notify.go:220] Checking for updates...
	I0722 10:46:38.200433   24174 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 10:46:38.201651   24174 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:46:38.202977   24174 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:46:38.204061   24174 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 10:46:38.205109   24174 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 10:46:38.206337   24174 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 10:46:38.239044   24174 out.go:177] * Using the kvm2 driver based on user configuration
	I0722 10:46:38.240138   24174 start.go:297] selected driver: kvm2
	I0722 10:46:38.240155   24174 start.go:901] validating driver "kvm2" against <nil>
	I0722 10:46:38.240180   24174 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 10:46:38.240938   24174 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:46:38.241043   24174 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 10:46:38.254722   24174 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 10:46:38.254755   24174 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 10:46:38.254971   24174 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 10:46:38.255017   24174 cni.go:84] Creating CNI manager for ""
	I0722 10:46:38.255028   24174 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0722 10:46:38.255034   24174 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0722 10:46:38.255094   24174 start.go:340] cluster config:
	{Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:46:38.255187   24174 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:46:38.256698   24174 out.go:177] * Starting "ha-461283" primary control-plane node in "ha-461283" cluster
	I0722 10:46:38.257819   24174 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:46:38.257842   24174 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 10:46:38.257848   24174 cache.go:56] Caching tarball of preloaded images
	I0722 10:46:38.257917   24174 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 10:46:38.257927   24174 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 10:46:38.258204   24174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:46:38.258224   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json: {Name:mk97f47cbaa54f35c862f0dd28f13f83cf708a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:46:38.258341   24174 start.go:360] acquireMachinesLock for ha-461283: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 10:46:38.258374   24174 start.go:364] duration metric: took 21.789µs to acquireMachinesLock for "ha-461283"
	I0722 10:46:38.258394   24174 start.go:93] Provisioning new machine with config: &{Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:46:38.258442   24174 start.go:125] createHost starting for "" (driver="kvm2")
	I0722 10:46:38.259793   24174 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 10:46:38.259890   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:46:38.259924   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:46:38.273144   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
	I0722 10:46:38.273503   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:46:38.273966   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:46:38.273993   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:46:38.274285   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:46:38.274483   24174 main.go:141] libmachine: (ha-461283) Calling .GetMachineName
	I0722 10:46:38.274631   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:46:38.274770   24174 start.go:159] libmachine.API.Create for "ha-461283" (driver="kvm2")
	I0722 10:46:38.274797   24174 client.go:168] LocalClient.Create starting
	I0722 10:46:38.274835   24174 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem
	I0722 10:46:38.274877   24174 main.go:141] libmachine: Decoding PEM data...
	I0722 10:46:38.274897   24174 main.go:141] libmachine: Parsing certificate...
	I0722 10:46:38.274974   24174 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem
	I0722 10:46:38.275005   24174 main.go:141] libmachine: Decoding PEM data...
	I0722 10:46:38.275024   24174 main.go:141] libmachine: Parsing certificate...
	I0722 10:46:38.275048   24174 main.go:141] libmachine: Running pre-create checks...
	I0722 10:46:38.275069   24174 main.go:141] libmachine: (ha-461283) Calling .PreCreateCheck
	I0722 10:46:38.275386   24174 main.go:141] libmachine: (ha-461283) Calling .GetConfigRaw
	I0722 10:46:38.275778   24174 main.go:141] libmachine: Creating machine...
	I0722 10:46:38.275794   24174 main.go:141] libmachine: (ha-461283) Calling .Create
	I0722 10:46:38.275914   24174 main.go:141] libmachine: (ha-461283) Creating KVM machine...
	I0722 10:46:38.277030   24174 main.go:141] libmachine: (ha-461283) DBG | found existing default KVM network
	I0722 10:46:38.277623   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:38.277507   24197 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0722 10:46:38.277646   24174 main.go:141] libmachine: (ha-461283) DBG | created network xml: 
	I0722 10:46:38.277655   24174 main.go:141] libmachine: (ha-461283) DBG | <network>
	I0722 10:46:38.277660   24174 main.go:141] libmachine: (ha-461283) DBG |   <name>mk-ha-461283</name>
	I0722 10:46:38.277666   24174 main.go:141] libmachine: (ha-461283) DBG |   <dns enable='no'/>
	I0722 10:46:38.277669   24174 main.go:141] libmachine: (ha-461283) DBG |   
	I0722 10:46:38.277675   24174 main.go:141] libmachine: (ha-461283) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0722 10:46:38.277681   24174 main.go:141] libmachine: (ha-461283) DBG |     <dhcp>
	I0722 10:46:38.277691   24174 main.go:141] libmachine: (ha-461283) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0722 10:46:38.277711   24174 main.go:141] libmachine: (ha-461283) DBG |     </dhcp>
	I0722 10:46:38.277720   24174 main.go:141] libmachine: (ha-461283) DBG |   </ip>
	I0722 10:46:38.277729   24174 main.go:141] libmachine: (ha-461283) DBG |   
	I0722 10:46:38.277741   24174 main.go:141] libmachine: (ha-461283) DBG | </network>
	I0722 10:46:38.277746   24174 main.go:141] libmachine: (ha-461283) DBG | 
	I0722 10:46:38.282356   24174 main.go:141] libmachine: (ha-461283) DBG | trying to create private KVM network mk-ha-461283 192.168.39.0/24...
	I0722 10:46:38.343389   24174 main.go:141] libmachine: (ha-461283) DBG | private KVM network mk-ha-461283 192.168.39.0/24 created
	I0722 10:46:38.343429   24174 main.go:141] libmachine: (ha-461283) Setting up store path in /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283 ...
	I0722 10:46:38.343444   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:38.343373   24197 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:46:38.343456   24174 main.go:141] libmachine: (ha-461283) Building disk image from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 10:46:38.343486   24174 main.go:141] libmachine: (ha-461283) Downloading /home/jenkins/minikube-integration/19313-5960/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 10:46:38.577561   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:38.577453   24197 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa...
	I0722 10:46:38.713410   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:38.713279   24197 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/ha-461283.rawdisk...
	I0722 10:46:38.713441   24174 main.go:141] libmachine: (ha-461283) DBG | Writing magic tar header
	I0722 10:46:38.713455   24174 main.go:141] libmachine: (ha-461283) DBG | Writing SSH key tar header
	I0722 10:46:38.713468   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:38.713386   24197 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283 ...
	I0722 10:46:38.713482   24174 main.go:141] libmachine: (ha-461283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283
	I0722 10:46:38.713578   24174 main.go:141] libmachine: (ha-461283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines
	I0722 10:46:38.713602   24174 main.go:141] libmachine: (ha-461283) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283 (perms=drwx------)
	I0722 10:46:38.713614   24174 main.go:141] libmachine: (ha-461283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:46:38.713627   24174 main.go:141] libmachine: (ha-461283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960
	I0722 10:46:38.713638   24174 main.go:141] libmachine: (ha-461283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0722 10:46:38.713654   24174 main.go:141] libmachine: (ha-461283) DBG | Checking permissions on dir: /home/jenkins
	I0722 10:46:38.713666   24174 main.go:141] libmachine: (ha-461283) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines (perms=drwxr-xr-x)
	I0722 10:46:38.713679   24174 main.go:141] libmachine: (ha-461283) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube (perms=drwxr-xr-x)
	I0722 10:46:38.713686   24174 main.go:141] libmachine: (ha-461283) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960 (perms=drwxrwxr-x)
	I0722 10:46:38.713701   24174 main.go:141] libmachine: (ha-461283) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0722 10:46:38.713714   24174 main.go:141] libmachine: (ha-461283) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0722 10:46:38.713728   24174 main.go:141] libmachine: (ha-461283) DBG | Checking permissions on dir: /home
	I0722 10:46:38.713740   24174 main.go:141] libmachine: (ha-461283) Creating domain...
	I0722 10:46:38.713759   24174 main.go:141] libmachine: (ha-461283) DBG | Skipping /home - not owner
	I0722 10:46:38.714729   24174 main.go:141] libmachine: (ha-461283) define libvirt domain using xml: 
	I0722 10:46:38.714787   24174 main.go:141] libmachine: (ha-461283) <domain type='kvm'>
	I0722 10:46:38.714800   24174 main.go:141] libmachine: (ha-461283)   <name>ha-461283</name>
	I0722 10:46:38.714808   24174 main.go:141] libmachine: (ha-461283)   <memory unit='MiB'>2200</memory>
	I0722 10:46:38.714818   24174 main.go:141] libmachine: (ha-461283)   <vcpu>2</vcpu>
	I0722 10:46:38.714841   24174 main.go:141] libmachine: (ha-461283)   <features>
	I0722 10:46:38.714855   24174 main.go:141] libmachine: (ha-461283)     <acpi/>
	I0722 10:46:38.714864   24174 main.go:141] libmachine: (ha-461283)     <apic/>
	I0722 10:46:38.714875   24174 main.go:141] libmachine: (ha-461283)     <pae/>
	I0722 10:46:38.714898   24174 main.go:141] libmachine: (ha-461283)     
	I0722 10:46:38.714912   24174 main.go:141] libmachine: (ha-461283)   </features>
	I0722 10:46:38.714921   24174 main.go:141] libmachine: (ha-461283)   <cpu mode='host-passthrough'>
	I0722 10:46:38.714931   24174 main.go:141] libmachine: (ha-461283)   
	I0722 10:46:38.714942   24174 main.go:141] libmachine: (ha-461283)   </cpu>
	I0722 10:46:38.714954   24174 main.go:141] libmachine: (ha-461283)   <os>
	I0722 10:46:38.714962   24174 main.go:141] libmachine: (ha-461283)     <type>hvm</type>
	I0722 10:46:38.714998   24174 main.go:141] libmachine: (ha-461283)     <boot dev='cdrom'/>
	I0722 10:46:38.715021   24174 main.go:141] libmachine: (ha-461283)     <boot dev='hd'/>
	I0722 10:46:38.715033   24174 main.go:141] libmachine: (ha-461283)     <bootmenu enable='no'/>
	I0722 10:46:38.715043   24174 main.go:141] libmachine: (ha-461283)   </os>
	I0722 10:46:38.715054   24174 main.go:141] libmachine: (ha-461283)   <devices>
	I0722 10:46:38.715065   24174 main.go:141] libmachine: (ha-461283)     <disk type='file' device='cdrom'>
	I0722 10:46:38.715080   24174 main.go:141] libmachine: (ha-461283)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/boot2docker.iso'/>
	I0722 10:46:38.715095   24174 main.go:141] libmachine: (ha-461283)       <target dev='hdc' bus='scsi'/>
	I0722 10:46:38.715106   24174 main.go:141] libmachine: (ha-461283)       <readonly/>
	I0722 10:46:38.715115   24174 main.go:141] libmachine: (ha-461283)     </disk>
	I0722 10:46:38.715126   24174 main.go:141] libmachine: (ha-461283)     <disk type='file' device='disk'>
	I0722 10:46:38.715138   24174 main.go:141] libmachine: (ha-461283)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0722 10:46:38.715153   24174 main.go:141] libmachine: (ha-461283)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/ha-461283.rawdisk'/>
	I0722 10:46:38.715164   24174 main.go:141] libmachine: (ha-461283)       <target dev='hda' bus='virtio'/>
	I0722 10:46:38.715176   24174 main.go:141] libmachine: (ha-461283)     </disk>
	I0722 10:46:38.715185   24174 main.go:141] libmachine: (ha-461283)     <interface type='network'>
	I0722 10:46:38.715197   24174 main.go:141] libmachine: (ha-461283)       <source network='mk-ha-461283'/>
	I0722 10:46:38.715207   24174 main.go:141] libmachine: (ha-461283)       <model type='virtio'/>
	I0722 10:46:38.715216   24174 main.go:141] libmachine: (ha-461283)     </interface>
	I0722 10:46:38.715226   24174 main.go:141] libmachine: (ha-461283)     <interface type='network'>
	I0722 10:46:38.715237   24174 main.go:141] libmachine: (ha-461283)       <source network='default'/>
	I0722 10:46:38.715247   24174 main.go:141] libmachine: (ha-461283)       <model type='virtio'/>
	I0722 10:46:38.715291   24174 main.go:141] libmachine: (ha-461283)     </interface>
	I0722 10:46:38.715309   24174 main.go:141] libmachine: (ha-461283)     <serial type='pty'>
	I0722 10:46:38.715319   24174 main.go:141] libmachine: (ha-461283)       <target port='0'/>
	I0722 10:46:38.715329   24174 main.go:141] libmachine: (ha-461283)     </serial>
	I0722 10:46:38.715341   24174 main.go:141] libmachine: (ha-461283)     <console type='pty'>
	I0722 10:46:38.715358   24174 main.go:141] libmachine: (ha-461283)       <target type='serial' port='0'/>
	I0722 10:46:38.715381   24174 main.go:141] libmachine: (ha-461283)     </console>
	I0722 10:46:38.715392   24174 main.go:141] libmachine: (ha-461283)     <rng model='virtio'>
	I0722 10:46:38.715404   24174 main.go:141] libmachine: (ha-461283)       <backend model='random'>/dev/random</backend>
	I0722 10:46:38.715413   24174 main.go:141] libmachine: (ha-461283)     </rng>
	I0722 10:46:38.715421   24174 main.go:141] libmachine: (ha-461283)     
	I0722 10:46:38.715430   24174 main.go:141] libmachine: (ha-461283)     
	I0722 10:46:38.715441   24174 main.go:141] libmachine: (ha-461283)   </devices>
	I0722 10:46:38.715461   24174 main.go:141] libmachine: (ha-461283) </domain>
	I0722 10:46:38.715475   24174 main.go:141] libmachine: (ha-461283) 
	I0722 10:46:38.719160   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:5d:41:e6 in network default
	I0722 10:46:38.719639   24174 main.go:141] libmachine: (ha-461283) Ensuring networks are active...
	I0722 10:46:38.719654   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:38.720334   24174 main.go:141] libmachine: (ha-461283) Ensuring network default is active
	I0722 10:46:38.720652   24174 main.go:141] libmachine: (ha-461283) Ensuring network mk-ha-461283 is active
	I0722 10:46:38.721108   24174 main.go:141] libmachine: (ha-461283) Getting domain xml...
	I0722 10:46:38.721719   24174 main.go:141] libmachine: (ha-461283) Creating domain...
	I0722 10:46:39.878056   24174 main.go:141] libmachine: (ha-461283) Waiting to get IP...
	I0722 10:46:39.878814   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:39.879213   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:39.879239   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:39.879169   24197 retry.go:31] will retry after 211.051521ms: waiting for machine to come up
	I0722 10:46:40.091502   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:40.091910   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:40.091938   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:40.091865   24197 retry.go:31] will retry after 243.80033ms: waiting for machine to come up
	I0722 10:46:40.337448   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:40.337829   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:40.337860   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:40.337793   24197 retry.go:31] will retry after 313.296222ms: waiting for machine to come up
	I0722 10:46:40.652162   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:40.652703   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:40.652730   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:40.652659   24197 retry.go:31] will retry after 491.357157ms: waiting for machine to come up
	I0722 10:46:41.145220   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:41.145735   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:41.145755   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:41.145693   24197 retry.go:31] will retry after 713.551121ms: waiting for machine to come up
	I0722 10:46:41.860641   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:41.861057   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:41.861085   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:41.861020   24197 retry.go:31] will retry after 599.546633ms: waiting for machine to come up
	I0722 10:46:42.461744   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:42.462129   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:42.462173   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:42.462100   24197 retry.go:31] will retry after 984.367854ms: waiting for machine to come up
	I0722 10:46:43.448943   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:43.449367   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:43.449395   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:43.449311   24197 retry.go:31] will retry after 1.326982923s: waiting for machine to come up
	I0722 10:46:44.777306   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:44.777665   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:44.777688   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:44.777626   24197 retry.go:31] will retry after 1.827526011s: waiting for machine to come up
	I0722 10:46:46.607846   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:46.608257   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:46.608296   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:46.608222   24197 retry.go:31] will retry after 2.205030482s: waiting for machine to come up
	I0722 10:46:48.814467   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:48.814895   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:48.814922   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:48.814858   24197 retry.go:31] will retry after 2.262882594s: waiting for machine to come up
	I0722 10:46:51.080211   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:51.080642   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:51.080664   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:51.080600   24197 retry.go:31] will retry after 3.047165474s: waiting for machine to come up
	I0722 10:46:54.129188   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:54.129583   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find current IP address of domain ha-461283 in network mk-ha-461283
	I0722 10:46:54.129609   24174 main.go:141] libmachine: (ha-461283) DBG | I0722 10:46:54.129546   24197 retry.go:31] will retry after 4.354207961s: waiting for machine to come up
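The retry.go lines above show the create-host code polling libvirt for the guest's DHCP lease, backing off with growing, jittered delays (roughly 200ms up to a few seconds) until an IP appears. A minimal standalone Go sketch of that retry pattern; the predicate, delay bounds, and jitter are illustrative assumptions, not minikube's actual implementation:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry keeps calling fn until it succeeds or maxWait elapses,
    // sleeping for a jittered, growing delay between attempts.
    func retry(fn func() error, maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        delay := 200 * time.Millisecond
        for {
            if err := fn(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for machine to come up")
            }
            // Add jitter and grow the delay, similar to the ~200ms..4s steps in the log.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
    }

    func main() {
        attempts := 0
        err := retry(func() error {
            attempts++
            if attempts < 5 {
                return errors.New("unable to find current IP address")
            }
            return nil // pretend the DHCP lease appeared
        }, time.Minute)
        fmt.Println("done:", err)
    }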
	I0722 10:46:58.484970   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.485388   24174 main.go:141] libmachine: (ha-461283) Found IP for machine: 192.168.39.43
	I0722 10:46:58.485422   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has current primary IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.485431   24174 main.go:141] libmachine: (ha-461283) Reserving static IP address...
	I0722 10:46:58.485749   24174 main.go:141] libmachine: (ha-461283) DBG | unable to find host DHCP lease matching {name: "ha-461283", mac: "52:54:00:1d:42:30", ip: "192.168.39.43"} in network mk-ha-461283
	I0722 10:46:58.551564   24174 main.go:141] libmachine: (ha-461283) DBG | Getting to WaitForSSH function...
	I0722 10:46:58.551595   24174 main.go:141] libmachine: (ha-461283) Reserved static IP address: 192.168.39.43
	I0722 10:46:58.551609   24174 main.go:141] libmachine: (ha-461283) Waiting for SSH to be available...
	I0722 10:46:58.553973   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.554325   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:58.554361   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.554435   24174 main.go:141] libmachine: (ha-461283) DBG | Using SSH client type: external
	I0722 10:46:58.554469   24174 main.go:141] libmachine: (ha-461283) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa (-rw-------)
	I0722 10:46:58.554495   24174 main.go:141] libmachine: (ha-461283) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 10:46:58.554507   24174 main.go:141] libmachine: (ha-461283) DBG | About to run SSH command:
	I0722 10:46:58.554519   24174 main.go:141] libmachine: (ha-461283) DBG | exit 0
	I0722 10:46:58.676276   24174 main.go:141] libmachine: (ha-461283) DBG | SSH cmd err, output: <nil>: 
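The WaitForSSH step above shells out to /usr/bin/ssh with host-key checking disabled and simply runs `exit 0` until the command succeeds. A small sketch of that reachability probe, assuming placeholder values for the guest address and key path:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady returns nil once `ssh ... exit 0` succeeds against the guest.
    // The options mirror the ones visible in the log; address and key path
    // below are placeholders, not values from a real environment.
    func sshReady(addr, keyPath string) error {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + addr,
            "exit 0",
        }
        return exec.Command("/usr/bin/ssh", args...).Run()
    }

    func main() {
        for {
            if err := sshReady("192.168.39.43", "/path/to/id_rsa"); err == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }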
	I0722 10:46:58.676593   24174 main.go:141] libmachine: (ha-461283) KVM machine creation complete!
	I0722 10:46:58.677059   24174 main.go:141] libmachine: (ha-461283) Calling .GetConfigRaw
	I0722 10:46:58.677560   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:46:58.677746   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:46:58.677893   24174 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0722 10:46:58.677908   24174 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:46:58.679105   24174 main.go:141] libmachine: Detecting operating system of created instance...
	I0722 10:46:58.679116   24174 main.go:141] libmachine: Waiting for SSH to be available...
	I0722 10:46:58.679123   24174 main.go:141] libmachine: Getting to WaitForSSH function...
	I0722 10:46:58.679138   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:58.681266   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.681691   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:58.681728   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.681856   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:58.682022   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:58.682179   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:58.682310   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:58.682472   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:46:58.682715   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:46:58.682730   24174 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0722 10:46:58.783807   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 10:46:58.783826   24174 main.go:141] libmachine: Detecting the provisioner...
	I0722 10:46:58.783832   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:58.786347   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.786666   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:58.786693   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.786919   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:58.787100   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:58.787269   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:58.787384   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:58.787501   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:46:58.787685   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:46:58.787698   24174 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0722 10:46:58.888947   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0722 10:46:58.889019   24174 main.go:141] libmachine: found compatible host: buildroot
	I0722 10:46:58.889029   24174 main.go:141] libmachine: Provisioning with buildroot...
	I0722 10:46:58.889038   24174 main.go:141] libmachine: (ha-461283) Calling .GetMachineName
	I0722 10:46:58.889291   24174 buildroot.go:166] provisioning hostname "ha-461283"
	I0722 10:46:58.889315   24174 main.go:141] libmachine: (ha-461283) Calling .GetMachineName
	I0722 10:46:58.889495   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:58.891793   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.892098   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:58.892121   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:58.892266   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:58.892431   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:58.892563   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:58.892682   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:58.892835   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:46:58.893049   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:46:58.893067   24174 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-461283 && echo "ha-461283" | sudo tee /etc/hostname
	I0722 10:46:59.005733   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-461283
	
	I0722 10:46:59.005754   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:59.008176   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.008431   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.008453   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.008599   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:59.008776   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.008937   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.009050   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:59.009170   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:46:59.009376   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:46:59.009392   24174 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-461283' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-461283/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-461283' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 10:46:59.117019   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 10:46:59.117044   24174 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 10:46:59.117075   24174 buildroot.go:174] setting up certificates
	I0722 10:46:59.117084   24174 provision.go:84] configureAuth start
	I0722 10:46:59.117095   24174 main.go:141] libmachine: (ha-461283) Calling .GetMachineName
	I0722 10:46:59.117349   24174 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:46:59.120000   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.120358   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.120399   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.120555   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:59.122736   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.123042   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.123066   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.123133   24174 provision.go:143] copyHostCerts
	I0722 10:46:59.123171   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 10:46:59.123208   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 10:46:59.123238   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 10:46:59.123316   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 10:46:59.123404   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 10:46:59.123435   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 10:46:59.123444   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 10:46:59.123480   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 10:46:59.123547   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 10:46:59.123570   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 10:46:59.123578   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 10:46:59.123608   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 10:46:59.123667   24174 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.ha-461283 san=[127.0.0.1 192.168.39.43 ha-461283 localhost minikube]
	I0722 10:46:59.316403   24174 provision.go:177] copyRemoteCerts
	I0722 10:46:59.316458   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 10:46:59.316480   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:59.319080   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.319360   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.319380   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.319564   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:59.319736   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.319891   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:59.319990   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:46:59.399168   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 10:46:59.399235   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 10:46:59.423274   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 10:46:59.423338   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0722 10:46:59.445969   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 10:46:59.446021   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 10:46:59.468209   24174 provision.go:87] duration metric: took 351.114311ms to configureAuth
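configureAuth above copies the host CA material to the guest and generates a server certificate with the SANs [127.0.0.1 192.168.39.43 ha-461283 localhost minikube]. A self-contained sketch of issuing such a certificate with crypto/x509; it creates a throwaway CA and uses hard-coded lifetimes, so it illustrates the idea rather than reproducing minikube's provision code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA key/cert (minikube reuses the CA under .minikube/certs).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs seen in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-461283"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-461283", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.43")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }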
	I0722 10:46:59.468231   24174 buildroot.go:189] setting minikube options for container-runtime
	I0722 10:46:59.468397   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:46:59.468470   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:59.470912   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.471209   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.471227   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.471423   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:59.471612   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.471770   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.471924   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:59.472084   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:46:59.472240   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:46:59.472257   24174 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 10:46:59.731437   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 10:46:59.731467   24174 main.go:141] libmachine: Checking connection to Docker...
	I0722 10:46:59.731478   24174 main.go:141] libmachine: (ha-461283) Calling .GetURL
	I0722 10:46:59.732656   24174 main.go:141] libmachine: (ha-461283) DBG | Using libvirt version 6000000
	I0722 10:46:59.734495   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.734771   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.734796   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.734958   24174 main.go:141] libmachine: Docker is up and running!
	I0722 10:46:59.734980   24174 main.go:141] libmachine: Reticulating splines...
	I0722 10:46:59.734992   24174 client.go:171] duration metric: took 21.460185416s to LocalClient.Create
	I0722 10:46:59.735015   24174 start.go:167] duration metric: took 21.460246012s to libmachine.API.Create "ha-461283"
	I0722 10:46:59.735025   24174 start.go:293] postStartSetup for "ha-461283" (driver="kvm2")
	I0722 10:46:59.735035   24174 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 10:46:59.735051   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:46:59.735297   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 10:46:59.735321   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:59.737204   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.737493   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.737517   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.737642   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:59.737815   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.737981   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:59.738127   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:46:59.819805   24174 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 10:46:59.824341   24174 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 10:46:59.824367   24174 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 10:46:59.824465   24174 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 10:46:59.824587   24174 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 10:46:59.824600   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /etc/ssl/certs/130982.pem
	I0722 10:46:59.824708   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 10:46:59.834595   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 10:46:59.857726   24174 start.go:296] duration metric: took 122.68973ms for postStartSetup
	I0722 10:46:59.857770   24174 main.go:141] libmachine: (ha-461283) Calling .GetConfigRaw
	I0722 10:46:59.858306   24174 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:46:59.860770   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.861135   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.861160   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.861375   24174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:46:59.861563   24174 start.go:128] duration metric: took 21.603112314s to createHost
	I0722 10:46:59.861601   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:59.863578   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.863856   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.863885   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.863991   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:59.864163   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.864295   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.864429   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:59.864560   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:46:59.864702   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:46:59.864712   24174 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 10:46:59.964927   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645219.938237293
	
	I0722 10:46:59.964946   24174 fix.go:216] guest clock: 1721645219.938237293
	I0722 10:46:59.964953   24174 fix.go:229] Guest: 2024-07-22 10:46:59.938237293 +0000 UTC Remote: 2024-07-22 10:46:59.86157437 +0000 UTC m=+21.708119370 (delta=76.662923ms)
	I0722 10:46:59.964971   24174 fix.go:200] guest clock delta is within tolerance: 76.662923ms
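The fix.go lines above read the guest clock with `date +%s.%N`, compare it against the host clock, and accept the ~77ms delta as within tolerance. A sketch of that comparison; the parsing mirrors the logged output format, while the tolerance value here is an assumption for illustration:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns `date +%s.%N` output (e.g. "1721645219.938237293")
    // into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec := int64(0)
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := parseGuestClock("1721645219.938237293")
        host := time.Now()
        delta := host.Sub(guest)
        const tolerance = 2 * time.Second // assumed threshold for illustration
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
        }
    }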
	I0722 10:46:59.964976   24174 start.go:83] releasing machines lock for "ha-461283", held for 21.706593928s
	I0722 10:46:59.964990   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:46:59.965205   24174 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:46:59.967418   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.967665   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.967693   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.967837   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:46:59.968278   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:46:59.968460   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:46:59.968576   24174 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 10:46:59.968623   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:59.968625   24174 ssh_runner.go:195] Run: cat /version.json
	I0722 10:46:59.968644   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:46:59.970974   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.971073   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.971317   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.971343   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.971367   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:46:59.971383   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:46:59.971456   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:59.971615   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:46:59.971621   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.971780   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:59.971793   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:46:59.971933   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:46:59.971946   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:46:59.972070   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:47:00.075078   24174 ssh_runner.go:195] Run: systemctl --version
	I0722 10:47:00.081128   24174 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 10:47:00.245140   24174 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 10:47:00.251293   24174 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 10:47:00.251349   24174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 10:47:00.269861   24174 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 10:47:00.269885   24174 start.go:495] detecting cgroup driver to use...
	I0722 10:47:00.269940   24174 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 10:47:00.286084   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 10:47:00.300419   24174 docker.go:217] disabling cri-docker service (if available) ...
	I0722 10:47:00.300490   24174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 10:47:00.314748   24174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 10:47:00.328509   24174 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 10:47:00.439875   24174 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 10:47:00.607897   24174 docker.go:233] disabling docker service ...
	I0722 10:47:00.607966   24174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 10:47:00.622144   24174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 10:47:00.634895   24174 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 10:47:00.742615   24174 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 10:47:00.850525   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 10:47:00.864521   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 10:47:00.882277   24174 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 10:47:00.882346   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:00.892619   24174 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 10:47:00.892678   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:00.903021   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:00.913199   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:00.923386   24174 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 10:47:00.933947   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:00.944265   24174 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:00.960405   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:00.970616   24174 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 10:47:00.979918   24174 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 10:47:00.979972   24174 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 10:47:00.991686   24174 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 10:47:01.001055   24174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:47:01.106372   24174 ssh_runner.go:195] Run: sudo systemctl restart crio
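The sed invocations above pin the CRI-O pause image to registry.k8s.io/pause:3.9 and switch the cgroup manager to cgroupfs in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A local sketch of the same rewrite applied to an in-memory config; the starting file contents are invented for illustration:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.8"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
        // Mirror the sed edits from the log: pin the pause image and use cgroupfs.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        // Drop any existing conmon_cgroup line; minikube re-adds it as "pod".
        conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        fmt.Print(conf)
    }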
	I0722 10:47:01.238492   24174 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 10:47:01.238570   24174 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 10:47:01.243407   24174 start.go:563] Will wait 60s for crictl version
	I0722 10:47:01.243452   24174 ssh_runner.go:195] Run: which crictl
	I0722 10:47:01.247174   24174 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 10:47:01.286447   24174 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 10:47:01.286530   24174 ssh_runner.go:195] Run: crio --version
	I0722 10:47:01.314485   24174 ssh_runner.go:195] Run: crio --version
	I0722 10:47:01.343254   24174 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 10:47:01.344418   24174 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:47:01.346906   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:47:01.347301   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:47:01.347333   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:47:01.347522   24174 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 10:47:01.351572   24174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 10:47:01.364615   24174 kubeadm.go:883] updating cluster {Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 10:47:01.364707   24174 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:47:01.364746   24174 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 10:47:01.396482   24174 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 10:47:01.396559   24174 ssh_runner.go:195] Run: which lz4
	I0722 10:47:01.400470   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0722 10:47:01.400580   24174 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 10:47:01.404612   24174 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 10:47:01.404633   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 10:47:02.790648   24174 crio.go:462] duration metric: took 1.390105316s to copy over tarball
	I0722 10:47:02.790722   24174 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 10:47:04.927301   24174 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.136542439s)
	I0722 10:47:04.927336   24174 crio.go:469] duration metric: took 2.136663526s to extract the tarball
	I0722 10:47:04.927345   24174 ssh_runner.go:146] rm: /preloaded.tar.lz4
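Because no preloaded images were found, the ~406MB preload tarball is copied to /preloaded.tar.lz4, unpacked into /var with lz4 and tar while preserving security xattrs, and then removed. A sketch that runs the same extraction command locally; the path and use of sudo are assumptions:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        tarball := "/preloaded.tar.lz4" // assumed to have been copied over already
        // Same extraction command as in the log: lz4-decompress into /var,
        // preserving security xattrs so file capabilities survive.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("extract failed:", err)
            return
        }
        // Clean up the tarball afterwards, as ssh_runner does with rm.
        _ = os.Remove(tarball)
        fmt.Println("preloaded images extracted")
    }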
	I0722 10:47:04.965923   24174 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 10:47:05.015846   24174 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 10:47:05.015868   24174 cache_images.go:84] Images are preloaded, skipping loading
	I0722 10:47:05.015877   24174 kubeadm.go:934] updating node { 192.168.39.43 8443 v1.30.3 crio true true} ...
	I0722 10:47:05.016104   24174 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-461283 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 10:47:05.016199   24174 ssh_runner.go:195] Run: crio config
	I0722 10:47:05.060548   24174 cni.go:84] Creating CNI manager for ""
	I0722 10:47:05.060566   24174 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0722 10:47:05.060576   24174 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 10:47:05.060601   24174 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-461283 NodeName:ha-461283 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 10:47:05.060750   24174 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-461283"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.43
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
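The kubeadm config above is rendered from the options recorded at kubeadm.go:181. A much-reduced sketch of templating such a config from a small options struct; the struct and template are simplified stand-ins, not minikube's real types:

    package main

    import (
        "os"
        "text/template"
    )

    // Opts is a trimmed-down stand-in for the kubeadm options logged above;
    // the real struct in minikube carries many more fields.
    type Opts struct {
        AdvertiseAddress string
        APIServerPort    int
        ClusterName      string
        NodeName         string
        PodSubnet        string
        ServiceCIDR      string
        K8sVersion       string
        CRISocket        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    clusterName: {{.ClusterName}}
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
        o := Opts{
            AdvertiseAddress: "192.168.39.43",
            APIServerPort:    8443,
            ClusterName:      "mk", // as in the rendered config above
            NodeName:         "ha-461283",
            PodSubnet:        "10.244.0.0/16",
            ServiceCIDR:      "10.96.0.0/12",
            K8sVersion:       "v1.30.3",
            CRISocket:        "unix:///var/run/crio/crio.sock",
        }
        template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, o)
    }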
	
	I0722 10:47:05.060774   24174 kube-vip.go:115] generating kube-vip config ...
	I0722 10:47:05.060823   24174 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 10:47:05.079086   24174 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 10:47:05.079207   24174 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0722 10:47:05.079260   24174 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 10:47:05.089756   24174 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 10:47:05.089823   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0722 10:47:05.099468   24174 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0722 10:47:05.115987   24174 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 10:47:05.131994   24174 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0722 10:47:05.148077   24174 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0722 10:47:05.164679   24174 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0722 10:47:05.168827   24174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 10:47:05.180730   24174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:47:05.320481   24174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 10:47:05.337743   24174 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283 for IP: 192.168.39.43
	I0722 10:47:05.337764   24174 certs.go:194] generating shared ca certs ...
	I0722 10:47:05.337783   24174 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:05.337933   24174 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 10:47:05.337982   24174 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 10:47:05.337995   24174 certs.go:256] generating profile certs ...
	I0722 10:47:05.338053   24174 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key
	I0722 10:47:05.338069   24174 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.crt with IP's: []
	I0722 10:47:05.383714   24174 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.crt ...
	I0722 10:47:05.383743   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.crt: {Name:mkb171df70710be618a58bf690afb21e809e5818 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:05.383934   24174 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key ...
	I0722 10:47:05.383948   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key: {Name:mkff020491afb1adea70aef1c3934b3ad6f7ba79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:05.384050   24174 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.9d9c95a9
	I0722 10:47:05.384075   24174 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.9d9c95a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.43 192.168.39.254]
	I0722 10:47:05.468803   24174 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.9d9c95a9 ...
	I0722 10:47:05.468832   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.9d9c95a9: {Name:mkb1e692f29ef9c1a8256a9539ef7be1ada40148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:05.469010   24174 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.9d9c95a9 ...
	I0722 10:47:05.469026   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.9d9c95a9: {Name:mk338616cb090895bedf9e1ac4cddee28ec5e7c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:05.469130   24174 certs.go:381] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.9d9c95a9 -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt
	I0722 10:47:05.469220   24174 certs.go:385] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.9d9c95a9 -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key
	I0722 10:47:05.469298   24174 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key
	I0722 10:47:05.469320   24174 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt with IP's: []
	I0722 10:47:05.673958   24174 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt ...
	I0722 10:47:05.673990   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt: {Name:mk6f787c87e693afa89eca8ff9fe8efd0b927b5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:05.674166   24174 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key ...
	I0722 10:47:05.674179   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key: {Name:mka2dfacbc83fe7edf41518e908d2a8e0a927e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:05.674273   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 10:47:05.674294   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 10:47:05.674309   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 10:47:05.674328   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 10:47:05.674347   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 10:47:05.674367   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 10:47:05.674385   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 10:47:05.674401   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 10:47:05.674463   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 10:47:05.674509   24174 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 10:47:05.674522   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 10:47:05.674556   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 10:47:05.674587   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 10:47:05.674617   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 10:47:05.674666   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 10:47:05.674702   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem -> /usr/share/ca-certificates/13098.pem
	I0722 10:47:05.674721   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /usr/share/ca-certificates/130982.pem
	I0722 10:47:05.674740   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:47:05.675282   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 10:47:05.700991   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 10:47:05.724214   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 10:47:05.746822   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 10:47:05.770107   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 10:47:05.793099   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 10:47:05.815971   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 10:47:05.838649   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 10:47:05.861123   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 10:47:05.884059   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 10:47:05.906560   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 10:47:05.928529   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 10:47:05.943858   24174 ssh_runner.go:195] Run: openssl version
	I0722 10:47:05.949452   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 10:47:05.959767   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 10:47:05.964070   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 10:47:05.964114   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 10:47:05.969978   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 10:47:05.980549   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 10:47:05.990675   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 10:47:05.994821   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 10:47:05.994867   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 10:47:06.000315   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 10:47:06.010704   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 10:47:06.021094   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:47:06.025386   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:47:06.025424   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:47:06.030837   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
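
	The openssl/ln sequences above are how minikube installs its CA material into the guest's system trust store: each PEM under /usr/share/ca-certificates is symlinked into /etc/ssl/certs, hashed with `openssl x509 -hash -noout`, and then given an OpenSSL-style `<hash>.0` symlink. A minimal shell sketch of the same steps, with paths taken from the log above (illustrative only, not additional output from this run):

	  # symlink the CA into /etc/ssl/certs, then add the OpenSSL hash link it is looked up by
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
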
	I0722 10:47:06.041101   24174 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 10:47:06.045007   24174 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 10:47:06.045059   24174 kubeadm.go:392] StartCluster: {Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:47:06.045125   24174 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 10:47:06.045188   24174 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 10:47:06.084180   24174 cri.go:89] found id: ""
	I0722 10:47:06.084238   24174 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 10:47:06.094148   24174 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 10:47:06.103367   24174 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 10:47:06.115692   24174 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 10:47:06.115713   24174 kubeadm.go:157] found existing configuration files:
	
	I0722 10:47:06.115758   24174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 10:47:06.124940   24174 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 10:47:06.124985   24174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 10:47:06.148183   24174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 10:47:06.159533   24174 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 10:47:06.159604   24174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 10:47:06.173428   24174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 10:47:06.187329   24174 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 10:47:06.187387   24174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 10:47:06.198205   24174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 10:47:06.207240   24174 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 10:47:06.207293   24174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 10:47:06.216307   24174 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 10:47:06.329090   24174 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 10:47:06.329235   24174 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 10:47:06.471260   24174 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 10:47:06.471393   24174 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 10:47:06.471511   24174 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 10:47:06.681235   24174 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 10:47:06.788849   24174 out.go:204]   - Generating certificates and keys ...
	I0722 10:47:06.788955   24174 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 10:47:06.789033   24174 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 10:47:06.919200   24174 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0722 10:47:06.980563   24174 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0722 10:47:07.147794   24174 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0722 10:47:07.230076   24174 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0722 10:47:07.496079   24174 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0722 10:47:07.496246   24174 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-461283 localhost] and IPs [192.168.39.43 127.0.0.1 ::1]
	I0722 10:47:07.808389   24174 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0722 10:47:07.808536   24174 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-461283 localhost] and IPs [192.168.39.43 127.0.0.1 ::1]
	I0722 10:47:07.890205   24174 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0722 10:47:08.131805   24174 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0722 10:47:08.307800   24174 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0722 10:47:08.307885   24174 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 10:47:08.467741   24174 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 10:47:08.601683   24174 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 10:47:08.817858   24174 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 10:47:09.028565   24174 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 10:47:09.111319   24174 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 10:47:09.111923   24174 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 10:47:09.114692   24174 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 10:47:09.116366   24174 out.go:204]   - Booting up control plane ...
	I0722 10:47:09.116481   24174 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 10:47:09.116556   24174 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 10:47:09.118341   24174 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 10:47:09.133851   24174 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 10:47:09.134721   24174 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 10:47:09.134783   24174 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 10:47:09.290939   24174 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 10:47:09.291041   24174 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 10:47:09.790524   24174 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.330082ms
	I0722 10:47:09.790609   24174 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 10:47:15.775977   24174 kubeadm.go:310] [api-check] The API server is healthy after 5.989151305s
	I0722 10:47:15.787856   24174 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 10:47:15.805406   24174 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 10:47:15.835921   24174 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 10:47:15.836164   24174 kubeadm.go:310] [mark-control-plane] Marking the node ha-461283 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 10:47:15.847227   24174 kubeadm.go:310] [bootstrap-token] Using token: vshj1k.w5z6g3thto8ie6ws
	I0722 10:47:15.848559   24174 out.go:204]   - Configuring RBAC rules ...
	I0722 10:47:15.848677   24174 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 10:47:15.854509   24174 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 10:47:15.862066   24174 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 10:47:15.869511   24174 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 10:47:15.874443   24174 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 10:47:15.878295   24174 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 10:47:16.183381   24174 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 10:47:16.636868   24174 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 10:47:17.182238   24174 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 10:47:17.183358   24174 kubeadm.go:310] 
	I0722 10:47:17.183451   24174 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 10:47:17.183462   24174 kubeadm.go:310] 
	I0722 10:47:17.183581   24174 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 10:47:17.183602   24174 kubeadm.go:310] 
	I0722 10:47:17.183658   24174 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 10:47:17.183743   24174 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 10:47:17.183807   24174 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 10:47:17.183817   24174 kubeadm.go:310] 
	I0722 10:47:17.183874   24174 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 10:47:17.183882   24174 kubeadm.go:310] 
	I0722 10:47:17.183931   24174 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 10:47:17.183943   24174 kubeadm.go:310] 
	I0722 10:47:17.184017   24174 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 10:47:17.184117   24174 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 10:47:17.184213   24174 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 10:47:17.184222   24174 kubeadm.go:310] 
	I0722 10:47:17.184329   24174 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 10:47:17.184451   24174 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 10:47:17.184461   24174 kubeadm.go:310] 
	I0722 10:47:17.184574   24174 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vshj1k.w5z6g3thto8ie6ws \
	I0722 10:47:17.184671   24174 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 10:47:17.184706   24174 kubeadm.go:310] 	--control-plane 
	I0722 10:47:17.184715   24174 kubeadm.go:310] 
	I0722 10:47:17.184811   24174 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 10:47:17.184821   24174 kubeadm.go:310] 
	I0722 10:47:17.184931   24174 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vshj1k.w5z6g3thto8ie6ws \
	I0722 10:47:17.185062   24174 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 10:47:17.185425   24174 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 10:47:17.185529   24174 cni.go:84] Creating CNI manager for ""
	I0722 10:47:17.185541   24174 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0722 10:47:17.187201   24174 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0722 10:47:17.188698   24174 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0722 10:47:17.193996   24174 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0722 10:47:17.194014   24174 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0722 10:47:17.213441   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0722 10:47:17.541309   24174 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 10:47:17.541424   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:17.541438   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-461283 minikube.k8s.io/updated_at=2024_07_22T10_47_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=ha-461283 minikube.k8s.io/primary=true
	I0722 10:47:17.626544   24174 ops.go:34] apiserver oom_adj: -16
	I0722 10:47:17.739150   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:18.239341   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:18.739971   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:19.239223   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:19.739929   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:20.239205   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:20.740153   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:21.239728   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:21.739417   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:22.239578   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:22.739898   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:23.239783   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:23.739199   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:24.239890   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:24.739999   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:25.240078   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:25.739538   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:26.239601   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:26.740161   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:27.239192   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:27.739566   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:28.239592   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:28.739245   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:29.239826   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:29.740137   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 10:47:29.853675   24174 kubeadm.go:1113] duration metric: took 12.312302949s to wait for elevateKubeSystemPrivileges
	I0722 10:47:29.853709   24174 kubeadm.go:394] duration metric: took 23.808652025s to StartCluster
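
	The run of repeated `kubectl get sa default` calls above is the wait behind the `elevateKubeSystemPrivileges` duration metric: minikube polls roughly twice a second until the "default" ServiceAccount exists (about 12.3s in this run). An equivalent shell sketch of that wait, using the binary and kubeconfig paths shown in the log (illustrative only):

	  # poll until the default ServiceAccount exists, then continue
	  until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done
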
	I0722 10:47:29.853731   24174 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:29.853815   24174 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:47:29.854481   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:29.854675   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0722 10:47:29.854683   24174 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:47:29.854700   24174 start.go:241] waiting for startup goroutines ...
	I0722 10:47:29.854707   24174 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 10:47:29.854750   24174 addons.go:69] Setting storage-provisioner=true in profile "ha-461283"
	I0722 10:47:29.854764   24174 addons.go:69] Setting default-storageclass=true in profile "ha-461283"
	I0722 10:47:29.854785   24174 addons.go:234] Setting addon storage-provisioner=true in "ha-461283"
	I0722 10:47:29.854799   24174 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-461283"
	I0722 10:47:29.854826   24174 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:47:29.854882   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:47:29.855183   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:47:29.855192   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:47:29.855221   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:47:29.855225   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:47:29.870374   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37101
	I0722 10:47:29.870374   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43173
	I0722 10:47:29.870774   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:47:29.870902   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:47:29.871418   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:47:29.871433   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:47:29.871552   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:47:29.871574   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:47:29.871790   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:47:29.871884   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:47:29.871982   24174 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:47:29.872425   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:47:29.872468   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:47:29.874094   24174 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:47:29.874430   24174 kapi.go:59] client config for ha-461283: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.crt", KeyFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key", CAFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 10:47:29.875043   24174 cert_rotation.go:137] Starting client certificate rotation controller
	I0722 10:47:29.875248   24174 addons.go:234] Setting addon default-storageclass=true in "ha-461283"
	I0722 10:47:29.875292   24174 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:47:29.875696   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:47:29.875782   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:47:29.888593   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36543
	I0722 10:47:29.889131   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:47:29.889610   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:47:29.889636   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:47:29.889944   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:47:29.890112   24174 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:47:29.890380   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45351
	I0722 10:47:29.890699   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:47:29.891177   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:47:29.891201   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:47:29.891515   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:47:29.891823   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:47:29.892091   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:47:29.892116   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:47:29.893436   24174 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 10:47:29.894507   24174 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 10:47:29.894528   24174 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 10:47:29.894545   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:47:29.897345   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:47:29.897750   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:47:29.897784   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:47:29.897894   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:47:29.898065   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:47:29.898214   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:47:29.898369   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:47:29.907642   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0722 10:47:29.908113   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:47:29.908649   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:47:29.908672   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:47:29.908961   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:47:29.909124   24174 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:47:29.910601   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:47:29.910788   24174 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 10:47:29.910802   24174 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 10:47:29.910813   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:47:29.913649   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:47:29.914042   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:47:29.914068   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:47:29.914202   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:47:29.914360   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:47:29.914541   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:47:29.914672   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:47:30.053761   24174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 10:47:30.063293   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0722 10:47:30.073055   24174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 10:47:30.758246   24174 main.go:141] libmachine: Making call to close driver server
	I0722 10:47:30.758275   24174 main.go:141] libmachine: (ha-461283) Calling .Close
	I0722 10:47:30.758283   24174 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
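
	The "host record injected" message corresponds to the sed pipeline run a few lines earlier against the coredns ConfigMap: it inserts a `hosts` block ahead of the `forward . /etc/resolv.conf` line and a `log` directive ahead of `errors`, so pods can resolve host.minikube.internal to the host-side gateway (192.168.39.1 here). Reconstructed from those sed expressions, the added Corefile fragment would look roughly like this (a sketch, not read back from the cluster):

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
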
	I0722 10:47:30.758365   24174 main.go:141] libmachine: Making call to close driver server
	I0722 10:47:30.758384   24174 main.go:141] libmachine: (ha-461283) Calling .Close
	I0722 10:47:30.758557   24174 main.go:141] libmachine: (ha-461283) DBG | Closing plugin on server side
	I0722 10:47:30.758600   24174 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:47:30.758606   24174 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:47:30.758609   24174 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:47:30.758621   24174 main.go:141] libmachine: (ha-461283) DBG | Closing plugin on server side
	I0722 10:47:30.758630   24174 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:47:30.758640   24174 main.go:141] libmachine: Making call to close driver server
	I0722 10:47:30.758702   24174 main.go:141] libmachine: Making call to close driver server
	I0722 10:47:30.758730   24174 main.go:141] libmachine: (ha-461283) Calling .Close
	I0722 10:47:30.758756   24174 main.go:141] libmachine: (ha-461283) Calling .Close
	I0722 10:47:30.758986   24174 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:47:30.758999   24174 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:47:30.759015   24174 main.go:141] libmachine: (ha-461283) DBG | Closing plugin on server side
	I0722 10:47:30.759047   24174 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:47:30.759084   24174 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:47:30.759168   24174 main.go:141] libmachine: (ha-461283) DBG | Closing plugin on server side
	I0722 10:47:30.759215   24174 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0722 10:47:30.759230   24174 round_trippers.go:469] Request Headers:
	I0722 10:47:30.759241   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:47:30.759249   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:47:30.768796   24174 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 10:47:30.769486   24174 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0722 10:47:30.769502   24174 round_trippers.go:469] Request Headers:
	I0722 10:47:30.769513   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:47:30.769520   24174 round_trippers.go:473]     Content-Type: application/json
	I0722 10:47:30.769526   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:47:30.779730   24174 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0722 10:47:30.779913   24174 main.go:141] libmachine: Making call to close driver server
	I0722 10:47:30.779930   24174 main.go:141] libmachine: (ha-461283) Calling .Close
	I0722 10:47:30.780192   24174 main.go:141] libmachine: Successfully made call to close driver server
	I0722 10:47:30.780201   24174 main.go:141] libmachine: (ha-461283) DBG | Closing plugin on server side
	I0722 10:47:30.780210   24174 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 10:47:30.781798   24174 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0722 10:47:30.783086   24174 addons.go:510] duration metric: took 928.374319ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0722 10:47:30.783124   24174 start.go:246] waiting for cluster config update ...
	I0722 10:47:30.783139   24174 start.go:255] writing updated cluster config ...
	I0722 10:47:30.784700   24174 out.go:177] 
	I0722 10:47:30.786021   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:47:30.786099   24174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:47:30.787696   24174 out.go:177] * Starting "ha-461283-m02" control-plane node in "ha-461283" cluster
	I0722 10:47:30.788917   24174 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:47:30.788938   24174 cache.go:56] Caching tarball of preloaded images
	I0722 10:47:30.789021   24174 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 10:47:30.789033   24174 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 10:47:30.789107   24174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:47:30.789324   24174 start.go:360] acquireMachinesLock for ha-461283-m02: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 10:47:30.789373   24174 start.go:364] duration metric: took 28.905µs to acquireMachinesLock for "ha-461283-m02"
	I0722 10:47:30.789395   24174 start.go:93] Provisioning new machine with config: &{Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:47:30.789475   24174 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0722 10:47:30.790912   24174 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 10:47:30.790995   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:47:30.791017   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:47:30.809793   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I0722 10:47:30.810272   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:47:30.810808   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:47:30.810835   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:47:30.811186   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:47:30.811360   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetMachineName
	I0722 10:47:30.811512   24174 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:47:30.811655   24174 start.go:159] libmachine.API.Create for "ha-461283" (driver="kvm2")
	I0722 10:47:30.811681   24174 client.go:168] LocalClient.Create starting
	I0722 10:47:30.811713   24174 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem
	I0722 10:47:30.811753   24174 main.go:141] libmachine: Decoding PEM data...
	I0722 10:47:30.811772   24174 main.go:141] libmachine: Parsing certificate...
	I0722 10:47:30.811832   24174 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem
	I0722 10:47:30.811856   24174 main.go:141] libmachine: Decoding PEM data...
	I0722 10:47:30.811890   24174 main.go:141] libmachine: Parsing certificate...
	I0722 10:47:30.811914   24174 main.go:141] libmachine: Running pre-create checks...
	I0722 10:47:30.811925   24174 main.go:141] libmachine: (ha-461283-m02) Calling .PreCreateCheck
	I0722 10:47:30.812066   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetConfigRaw
	I0722 10:47:30.812481   24174 main.go:141] libmachine: Creating machine...
	I0722 10:47:30.812494   24174 main.go:141] libmachine: (ha-461283-m02) Calling .Create
	I0722 10:47:30.812621   24174 main.go:141] libmachine: (ha-461283-m02) Creating KVM machine...
	I0722 10:47:30.813690   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found existing default KVM network
	I0722 10:47:30.813811   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found existing private KVM network mk-ha-461283
	I0722 10:47:30.813956   24174 main.go:141] libmachine: (ha-461283-m02) Setting up store path in /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02 ...
	I0722 10:47:30.813977   24174 main.go:141] libmachine: (ha-461283-m02) Building disk image from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 10:47:30.814022   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:30.813934   24559 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:47:30.814143   24174 main.go:141] libmachine: (ha-461283-m02) Downloading /home/jenkins/minikube-integration/19313-5960/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 10:47:31.053571   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:31.053413   24559 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa...
	I0722 10:47:31.215683   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:31.215590   24559 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/ha-461283-m02.rawdisk...
	I0722 10:47:31.215720   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Writing magic tar header
	I0722 10:47:31.215731   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Writing SSH key tar header
	I0722 10:47:31.215811   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:31.215737   24559 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02 ...
	I0722 10:47:31.215875   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02
	I0722 10:47:31.215902   24174 main.go:141] libmachine: (ha-461283-m02) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02 (perms=drwx------)
	I0722 10:47:31.215919   24174 main.go:141] libmachine: (ha-461283-m02) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines (perms=drwxr-xr-x)
	I0722 10:47:31.215934   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines
	I0722 10:47:31.215949   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:47:31.215962   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960
	I0722 10:47:31.215974   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0722 10:47:31.215983   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Checking permissions on dir: /home/jenkins
	I0722 10:47:31.216071   24174 main.go:141] libmachine: (ha-461283-m02) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube (perms=drwxr-xr-x)
	I0722 10:47:31.216107   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Checking permissions on dir: /home
	I0722 10:47:31.216118   24174 main.go:141] libmachine: (ha-461283-m02) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960 (perms=drwxrwxr-x)
	I0722 10:47:31.216125   24174 main.go:141] libmachine: (ha-461283-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0722 10:47:31.216136   24174 main.go:141] libmachine: (ha-461283-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0722 10:47:31.216151   24174 main.go:141] libmachine: (ha-461283-m02) Creating domain...
	I0722 10:47:31.216164   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Skipping /home - not owner
	I0722 10:47:31.216977   24174 main.go:141] libmachine: (ha-461283-m02) define libvirt domain using xml: 
	I0722 10:47:31.216992   24174 main.go:141] libmachine: (ha-461283-m02) <domain type='kvm'>
	I0722 10:47:31.217001   24174 main.go:141] libmachine: (ha-461283-m02)   <name>ha-461283-m02</name>
	I0722 10:47:31.217009   24174 main.go:141] libmachine: (ha-461283-m02)   <memory unit='MiB'>2200</memory>
	I0722 10:47:31.217028   24174 main.go:141] libmachine: (ha-461283-m02)   <vcpu>2</vcpu>
	I0722 10:47:31.217039   24174 main.go:141] libmachine: (ha-461283-m02)   <features>
	I0722 10:47:31.217048   24174 main.go:141] libmachine: (ha-461283-m02)     <acpi/>
	I0722 10:47:31.217057   24174 main.go:141] libmachine: (ha-461283-m02)     <apic/>
	I0722 10:47:31.217065   24174 main.go:141] libmachine: (ha-461283-m02)     <pae/>
	I0722 10:47:31.217078   24174 main.go:141] libmachine: (ha-461283-m02)     
	I0722 10:47:31.217091   24174 main.go:141] libmachine: (ha-461283-m02)   </features>
	I0722 10:47:31.217101   24174 main.go:141] libmachine: (ha-461283-m02)   <cpu mode='host-passthrough'>
	I0722 10:47:31.217110   24174 main.go:141] libmachine: (ha-461283-m02)   
	I0722 10:47:31.217114   24174 main.go:141] libmachine: (ha-461283-m02)   </cpu>
	I0722 10:47:31.217120   24174 main.go:141] libmachine: (ha-461283-m02)   <os>
	I0722 10:47:31.217124   24174 main.go:141] libmachine: (ha-461283-m02)     <type>hvm</type>
	I0722 10:47:31.217130   24174 main.go:141] libmachine: (ha-461283-m02)     <boot dev='cdrom'/>
	I0722 10:47:31.217147   24174 main.go:141] libmachine: (ha-461283-m02)     <boot dev='hd'/>
	I0722 10:47:31.217161   24174 main.go:141] libmachine: (ha-461283-m02)     <bootmenu enable='no'/>
	I0722 10:47:31.217170   24174 main.go:141] libmachine: (ha-461283-m02)   </os>
	I0722 10:47:31.217176   24174 main.go:141] libmachine: (ha-461283-m02)   <devices>
	I0722 10:47:31.217187   24174 main.go:141] libmachine: (ha-461283-m02)     <disk type='file' device='cdrom'>
	I0722 10:47:31.217197   24174 main.go:141] libmachine: (ha-461283-m02)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/boot2docker.iso'/>
	I0722 10:47:31.217210   24174 main.go:141] libmachine: (ha-461283-m02)       <target dev='hdc' bus='scsi'/>
	I0722 10:47:31.217217   24174 main.go:141] libmachine: (ha-461283-m02)       <readonly/>
	I0722 10:47:31.217228   24174 main.go:141] libmachine: (ha-461283-m02)     </disk>
	I0722 10:47:31.217239   24174 main.go:141] libmachine: (ha-461283-m02)     <disk type='file' device='disk'>
	I0722 10:47:31.217273   24174 main.go:141] libmachine: (ha-461283-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0722 10:47:31.217309   24174 main.go:141] libmachine: (ha-461283-m02)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/ha-461283-m02.rawdisk'/>
	I0722 10:47:31.217330   24174 main.go:141] libmachine: (ha-461283-m02)       <target dev='hda' bus='virtio'/>
	I0722 10:47:31.217345   24174 main.go:141] libmachine: (ha-461283-m02)     </disk>
	I0722 10:47:31.217357   24174 main.go:141] libmachine: (ha-461283-m02)     <interface type='network'>
	I0722 10:47:31.217368   24174 main.go:141] libmachine: (ha-461283-m02)       <source network='mk-ha-461283'/>
	I0722 10:47:31.217378   24174 main.go:141] libmachine: (ha-461283-m02)       <model type='virtio'/>
	I0722 10:47:31.217387   24174 main.go:141] libmachine: (ha-461283-m02)     </interface>
	I0722 10:47:31.217399   24174 main.go:141] libmachine: (ha-461283-m02)     <interface type='network'>
	I0722 10:47:31.217409   24174 main.go:141] libmachine: (ha-461283-m02)       <source network='default'/>
	I0722 10:47:31.217419   24174 main.go:141] libmachine: (ha-461283-m02)       <model type='virtio'/>
	I0722 10:47:31.217431   24174 main.go:141] libmachine: (ha-461283-m02)     </interface>
	I0722 10:47:31.217442   24174 main.go:141] libmachine: (ha-461283-m02)     <serial type='pty'>
	I0722 10:47:31.217454   24174 main.go:141] libmachine: (ha-461283-m02)       <target port='0'/>
	I0722 10:47:31.217464   24174 main.go:141] libmachine: (ha-461283-m02)     </serial>
	I0722 10:47:31.217472   24174 main.go:141] libmachine: (ha-461283-m02)     <console type='pty'>
	I0722 10:47:31.217483   24174 main.go:141] libmachine: (ha-461283-m02)       <target type='serial' port='0'/>
	I0722 10:47:31.217493   24174 main.go:141] libmachine: (ha-461283-m02)     </console>
	I0722 10:47:31.217502   24174 main.go:141] libmachine: (ha-461283-m02)     <rng model='virtio'>
	I0722 10:47:31.217518   24174 main.go:141] libmachine: (ha-461283-m02)       <backend model='random'>/dev/random</backend>
	I0722 10:47:31.217529   24174 main.go:141] libmachine: (ha-461283-m02)     </rng>
	I0722 10:47:31.217539   24174 main.go:141] libmachine: (ha-461283-m02)     
	I0722 10:47:31.217547   24174 main.go:141] libmachine: (ha-461283-m02)     
	I0722 10:47:31.217558   24174 main.go:141] libmachine: (ha-461283-m02)   </devices>
	I0722 10:47:31.217569   24174 main.go:141] libmachine: (ha-461283-m02) </domain>
	I0722 10:47:31.217577   24174 main.go:141] libmachine: (ha-461283-m02) 
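	The block above is the complete libvirt domain XML that libmachine hands to the hypervisor for the second control-plane node (CD-ROM boot ISO, raw disk, two virtio NICs on the mk-ha-461283 and default networks, serial console, virtio RNG). As a rough sketch of the same define-and-boot step using the Go libvirt bindings — not minikube's actual code; the import path, connection URI, and file name are assumptions for illustration:

	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt" // assumed module path for the libvirt Go bindings
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// the domain XML shown in the log above, read from a file here for brevity
		xml, err := os.ReadFile("ha-461283-m02.xml")
		if err != nil {
			log.Fatal(err)
		}

		dom, err := conn.DomainDefineXML(string(xml)) // persistently define the domain
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boot it ("Creating domain..." above)
			log.Fatal(err)
		}
	}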
	I0722 10:47:31.223742   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:2e:15:a4 in network default
	I0722 10:47:31.224298   24174 main.go:141] libmachine: (ha-461283-m02) Ensuring networks are active...
	I0722 10:47:31.224329   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:31.225166   24174 main.go:141] libmachine: (ha-461283-m02) Ensuring network default is active
	I0722 10:47:31.225485   24174 main.go:141] libmachine: (ha-461283-m02) Ensuring network mk-ha-461283 is active
	I0722 10:47:31.225842   24174 main.go:141] libmachine: (ha-461283-m02) Getting domain xml...
	I0722 10:47:31.226695   24174 main.go:141] libmachine: (ha-461283-m02) Creating domain...
	I0722 10:47:32.436447   24174 main.go:141] libmachine: (ha-461283-m02) Waiting to get IP...
	I0722 10:47:32.437487   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:32.437934   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:32.437982   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:32.437904   24559 retry.go:31] will retry after 288.868303ms: waiting for machine to come up
	I0722 10:47:32.728315   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:32.728764   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:32.728790   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:32.728717   24559 retry.go:31] will retry after 378.239876ms: waiting for machine to come up
	I0722 10:47:33.108293   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:33.108869   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:33.108900   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:33.108798   24559 retry.go:31] will retry after 413.894738ms: waiting for machine to come up
	I0722 10:47:33.524142   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:33.524580   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:33.524608   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:33.524547   24559 retry.go:31] will retry after 555.748732ms: waiting for machine to come up
	I0722 10:47:34.082284   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:34.082731   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:34.082761   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:34.082690   24559 retry.go:31] will retry after 731.862289ms: waiting for machine to come up
	I0722 10:47:34.816601   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:34.817015   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:34.817044   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:34.816977   24559 retry.go:31] will retry after 770.464616ms: waiting for machine to come up
	I0722 10:47:35.588905   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:35.589391   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:35.589420   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:35.589332   24559 retry.go:31] will retry after 873.256858ms: waiting for machine to come up
	I0722 10:47:36.464080   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:36.464468   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:36.464495   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:36.464429   24559 retry.go:31] will retry after 1.402422875s: waiting for machine to come up
	I0722 10:47:37.868851   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:37.869255   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:37.869311   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:37.869226   24559 retry.go:31] will retry after 1.689037725s: waiting for machine to come up
	I0722 10:47:39.559985   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:39.560442   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:39.560496   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:39.560401   24559 retry.go:31] will retry after 1.943562609s: waiting for machine to come up
	I0722 10:47:41.505107   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:41.505555   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:41.505584   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:41.505507   24559 retry.go:31] will retry after 1.896819693s: waiting for machine to come up
	I0722 10:47:43.403486   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:43.403863   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:43.403905   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:43.403826   24559 retry.go:31] will retry after 2.894977506s: waiting for machine to come up
	I0722 10:47:46.300078   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:46.300472   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:46.300499   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:46.300430   24559 retry.go:31] will retry after 3.384903237s: waiting for machine to come up
	I0722 10:47:49.688927   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:49.689333   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find current IP address of domain ha-461283-m02 in network mk-ha-461283
	I0722 10:47:49.689359   24174 main.go:141] libmachine: (ha-461283-m02) DBG | I0722 10:47:49.689311   24559 retry.go:31] will retry after 5.437630652s: waiting for machine to come up
	I0722 10:47:55.132136   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.132633   24174 main.go:141] libmachine: (ha-461283-m02) Found IP for machine: 192.168.39.207
	I0722 10:47:55.132653   24174 main.go:141] libmachine: (ha-461283-m02) Reserving static IP address...
	I0722 10:47:55.132683   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has current primary IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.132979   24174 main.go:141] libmachine: (ha-461283-m02) DBG | unable to find host DHCP lease matching {name: "ha-461283-m02", mac: "52:54:00:a7:59:21", ip: "192.168.39.207"} in network mk-ha-461283
	I0722 10:47:55.200912   24174 main.go:141] libmachine: (ha-461283-m02) Reserved static IP address: 192.168.39.207
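	The retry.go lines above poll the libvirt network's DHCP leases for the domain's MAC address with progressively longer waits until an IP appears. A minimal sketch of that wait-with-growing-delay pattern — the function names, jitter, and durations are placeholders, not minikube's retry package:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupLeaseIP stands in for parsing the libvirt network's DHCP leases
	// for the given MAC address, as the DBG lines above describe.
	func lookupLeaseIP(mac string) (string, error) {
		return "", errors.New("no lease for " + mac + " yet")
	}

	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupLeaseIP(mac); err == nil {
				return ip, nil
			}
			// jitter and grow the delay, matching the increasing
			// "will retry after ..." intervals in the log
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay += delay / 2
		}
		return "", fmt.Errorf("no IP for %s within %s", mac, timeout)
	}

	func main() {
		if ip, err := waitForIP("52:54:00:a7:59:21", 5*time.Second); err == nil {
			fmt.Println("found IP:", ip)
		}
	}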
	I0722 10:47:55.200942   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Getting to WaitForSSH function...
	I0722 10:47:55.200950   24174 main.go:141] libmachine: (ha-461283-m02) Waiting for SSH to be available...
	I0722 10:47:55.203647   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.204124   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:55.204153   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.204285   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Using SSH client type: external
	I0722 10:47:55.204304   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa (-rw-------)
	I0722 10:47:55.204335   24174 main.go:141] libmachine: (ha-461283-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 10:47:55.204346   24174 main.go:141] libmachine: (ha-461283-m02) DBG | About to run SSH command:
	I0722 10:47:55.204355   24174 main.go:141] libmachine: (ha-461283-m02) DBG | exit 0
	I0722 10:47:55.336397   24174 main.go:141] libmachine: (ha-461283-m02) DBG | SSH cmd err, output: <nil>: 
	I0722 10:47:55.336658   24174 main.go:141] libmachine: (ha-461283-m02) KVM machine creation complete!
	I0722 10:47:55.337055   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetConfigRaw
	I0722 10:47:55.337646   24174 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:47:55.337831   24174 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:47:55.338013   24174 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0722 10:47:55.338028   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetState
	I0722 10:47:55.339291   24174 main.go:141] libmachine: Detecting operating system of created instance...
	I0722 10:47:55.339307   24174 main.go:141] libmachine: Waiting for SSH to be available...
	I0722 10:47:55.339315   24174 main.go:141] libmachine: Getting to WaitForSSH function...
	I0722 10:47:55.339323   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:55.341274   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.341603   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:55.341630   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.341766   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:55.341921   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.342054   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.342173   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:55.342322   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:47:55.342508   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0722 10:47:55.342521   24174 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0722 10:47:55.451331   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
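	The WaitForSSH step above is simply running "exit 0" over SSH with the machine's generated key until the command succeeds. A sketch of that probe with golang.org/x/crypto/ssh — the address, user, and key path are taken from the log; the function name and timeout are illustrative:

	package main

	import (
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func sshReachable(addr, user, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		return sess.Run("exit 0") // the same probe the log shows
	}

	func main() {
		err := sshReachable("192.168.39.207:22", "docker",
			"/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
	}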
	I0722 10:47:55.451352   24174 main.go:141] libmachine: Detecting the provisioner...
	I0722 10:47:55.451362   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:55.454013   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.454340   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:55.454367   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.454486   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:55.454653   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.454804   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.454945   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:55.455100   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:47:55.455300   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0722 10:47:55.455316   24174 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0722 10:47:55.568915   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0722 10:47:55.568993   24174 main.go:141] libmachine: found compatible host: buildroot
	I0722 10:47:55.569006   24174 main.go:141] libmachine: Provisioning with buildroot...
	I0722 10:47:55.569016   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetMachineName
	I0722 10:47:55.569242   24174 buildroot.go:166] provisioning hostname "ha-461283-m02"
	I0722 10:47:55.569279   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetMachineName
	I0722 10:47:55.569456   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:55.572113   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.572438   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:55.572473   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.572633   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:55.572799   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.572944   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.573063   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:55.573178   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:47:55.573346   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0722 10:47:55.573357   24174 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-461283-m02 && echo "ha-461283-m02" | sudo tee /etc/hostname
	I0722 10:47:55.699778   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-461283-m02
	
	I0722 10:47:55.699804   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:55.702298   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.702649   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:55.702682   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.702857   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:55.703007   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.703129   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.703262   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:55.703472   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:47:55.703679   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0722 10:47:55.703696   24174 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-461283-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-461283-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-461283-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 10:47:55.826649   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 10:47:55.826674   24174 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 10:47:55.826688   24174 buildroot.go:174] setting up certificates
	I0722 10:47:55.826697   24174 provision.go:84] configureAuth start
	I0722 10:47:55.826704   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetMachineName
	I0722 10:47:55.826918   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetIP
	I0722 10:47:55.829420   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.829755   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:55.829778   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.829941   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:55.831732   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.831950   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:55.831979   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.832071   24174 provision.go:143] copyHostCerts
	I0722 10:47:55.832099   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 10:47:55.832138   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 10:47:55.832150   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 10:47:55.832224   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 10:47:55.832367   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 10:47:55.832405   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 10:47:55.832415   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 10:47:55.832455   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 10:47:55.832504   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 10:47:55.832520   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 10:47:55.832526   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 10:47:55.832550   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 10:47:55.832600   24174 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.ha-461283-m02 san=[127.0.0.1 192.168.39.207 ha-461283-m02 localhost minikube]
	I0722 10:47:55.977172   24174 provision.go:177] copyRemoteCerts
	I0722 10:47:55.977222   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 10:47:55.977240   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:55.979482   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.979780   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:55.979802   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:55.980017   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:55.980213   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:55.980399   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:55.980536   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	I0722 10:47:56.066264   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 10:47:56.066328   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 10:47:56.093525   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 10:47:56.093586   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 10:47:56.117413   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 10:47:56.117466   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 10:47:56.140595   24174 provision.go:87] duration metric: took 313.886457ms to configureAuth
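	configureAuth above re-syncs the host CA material and issues the machine a CA-signed server certificate whose SANs cover 127.0.0.1, the node IP, its hostname, localhost and minikube. A compact sketch of issuing such a SAN-bearing server certificate with crypto/x509 — the throwaway in-memory CA, file names, and validity periods are simplifying assumptions, not minikube's actual layout:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// make a throwaway CA, then sign a server cert carrying the same kind
		// of SAN list the provision.go line above reports
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-461283-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.207")},
			DNSNames:     []string{"ha-461283-m02", "localhost", "minikube"},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		_ = os.WriteFile("server.pem",
			pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0644)
	}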
	I0722 10:47:56.140619   24174 buildroot.go:189] setting minikube options for container-runtime
	I0722 10:47:56.140767   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:47:56.140832   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:56.143335   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.143698   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:56.143720   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.143924   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:56.144091   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:56.144255   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:56.144375   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:56.144547   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:47:56.144729   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0722 10:47:56.144746   24174 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 10:47:56.435279   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 10:47:56.435307   24174 main.go:141] libmachine: Checking connection to Docker...
	I0722 10:47:56.435317   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetURL
	I0722 10:47:56.436836   24174 main.go:141] libmachine: (ha-461283-m02) DBG | Using libvirt version 6000000
	I0722 10:47:56.439630   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.440017   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:56.440039   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.440229   24174 main.go:141] libmachine: Docker is up and running!
	I0722 10:47:56.440245   24174 main.go:141] libmachine: Reticulating splines...
	I0722 10:47:56.440252   24174 client.go:171] duration metric: took 25.62856269s to LocalClient.Create
	I0722 10:47:56.440274   24174 start.go:167] duration metric: took 25.628621079s to libmachine.API.Create "ha-461283"
	I0722 10:47:56.440281   24174 start.go:293] postStartSetup for "ha-461283-m02" (driver="kvm2")
	I0722 10:47:56.440291   24174 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 10:47:56.440316   24174 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:47:56.440572   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 10:47:56.440592   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:56.442760   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.443071   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:56.443089   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.443242   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:56.443419   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:56.443584   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:56.443733   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	I0722 10:47:56.531078   24174 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 10:47:56.535593   24174 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 10:47:56.535623   24174 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 10:47:56.535718   24174 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 10:47:56.535867   24174 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 10:47:56.535882   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /etc/ssl/certs/130982.pem
	I0722 10:47:56.536006   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 10:47:56.544961   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 10:47:56.569042   24174 start.go:296] duration metric: took 128.750355ms for postStartSetup
	I0722 10:47:56.569083   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetConfigRaw
	I0722 10:47:56.569663   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetIP
	I0722 10:47:56.572431   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.572805   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:56.572833   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.573025   24174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:47:56.573231   24174 start.go:128] duration metric: took 25.783745658s to createHost
	I0722 10:47:56.573252   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:56.575374   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.575698   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:56.575729   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.575869   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:56.576111   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:56.576293   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:56.576435   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:56.576596   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:47:56.576743   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0722 10:47:56.576753   24174 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 10:47:56.688833   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645276.666438491
	
	I0722 10:47:56.688860   24174 fix.go:216] guest clock: 1721645276.666438491
	I0722 10:47:56.688871   24174 fix.go:229] Guest: 2024-07-22 10:47:56.666438491 +0000 UTC Remote: 2024-07-22 10:47:56.573243102 +0000 UTC m=+78.419788115 (delta=93.195389ms)
	I0722 10:47:56.688895   24174 fix.go:200] guest clock delta is within tolerance: 93.195389ms
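	The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the host's wall clock, and accept the machine when the absolute delta is inside a tolerance. The check reduces to something like the sketch below; the function name and the tolerance value are placeholders, only the timestamps come from the log:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK reports the absolute guest-vs-host clock delta and whether
	// it is within the allowed tolerance.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		guest := time.Unix(1721645276, 666438491)          // guest clock reported above
		host := guest.Add(-93195389 * time.Nanosecond)     // host clock lagging by the logged delta
		d, ok := clockDeltaOK(guest, host, 2*time.Second)  // tolerance here is illustrative
		fmt.Println(d, ok)                                 // 93.195389ms true
	}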
	I0722 10:47:56.688906   24174 start.go:83] releasing machines lock for "ha-461283-m02", held for 25.899520813s
	I0722 10:47:56.688934   24174 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:47:56.689186   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetIP
	I0722 10:47:56.691616   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.691947   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:56.691967   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.694066   24174 out.go:177] * Found network options:
	I0722 10:47:56.695515   24174 out.go:177]   - NO_PROXY=192.168.39.43
	W0722 10:47:56.696822   24174 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 10:47:56.696863   24174 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:47:56.697471   24174 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:47:56.697647   24174 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 10:47:56.697743   24174 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 10:47:56.697784   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	W0722 10:47:56.697811   24174 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 10:47:56.697891   24174 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 10:47:56.697911   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 10:47:56.700410   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.700648   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.700725   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:56.700748   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.700879   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:56.700998   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:56.701022   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:56.701027   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:56.701180   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:56.701211   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 10:47:56.701349   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	I0722 10:47:56.701360   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 10:47:56.701517   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 10:47:56.701627   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	I0722 10:47:56.948845   24174 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 10:47:56.956269   24174 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 10:47:56.956331   24174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 10:47:56.973374   24174 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 10:47:56.973391   24174 start.go:495] detecting cgroup driver to use...
	I0722 10:47:56.973435   24174 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 10:47:56.992989   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 10:47:57.009902   24174 docker.go:217] disabling cri-docker service (if available) ...
	I0722 10:47:57.009961   24174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 10:47:57.025982   24174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 10:47:57.039149   24174 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 10:47:57.149910   24174 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 10:47:57.330760   24174 docker.go:233] disabling docker service ...
	I0722 10:47:57.330834   24174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 10:47:57.344563   24174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 10:47:57.357536   24174 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 10:47:57.475780   24174 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 10:47:57.594265   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 10:47:57.609478   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 10:47:57.627377   24174 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 10:47:57.627437   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:57.637202   24174 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 10:47:57.637266   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:57.647096   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:57.656843   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:57.666504   24174 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 10:47:57.676452   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:57.687046   24174 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:57.703812   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:47:57.713671   24174 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 10:47:57.722303   24174 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 10:47:57.722349   24174 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 10:47:57.734915   24174 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 10:47:57.743900   24174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:47:57.855952   24174 ssh_runner.go:195] Run: sudo systemctl restart crio
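	Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf configured with the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, conmon running in the pod cgroup, and the unprivileged-port sysctl re-added — roughly the following (the [crio.*] section headers are shown for orientation and are not part of the logged sed commands):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]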
	I0722 10:47:58.001609   24174 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 10:47:58.001687   24174 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 10:47:58.006322   24174 start.go:563] Will wait 60s for crictl version
	I0722 10:47:58.006370   24174 ssh_runner.go:195] Run: which crictl
	I0722 10:47:58.010146   24174 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 10:47:58.050516   24174 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 10:47:58.050584   24174 ssh_runner.go:195] Run: crio --version
	I0722 10:47:58.079421   24174 ssh_runner.go:195] Run: crio --version
	I0722 10:47:58.109795   24174 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 10:47:58.111043   24174 out.go:177]   - env NO_PROXY=192.168.39.43
	I0722 10:47:58.112290   24174 main.go:141] libmachine: (ha-461283-m02) Calling .GetIP
	I0722 10:47:58.114875   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:58.115259   24174 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:47:45 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 10:47:58.115281   24174 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 10:47:58.115505   24174 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 10:47:58.119902   24174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 10:47:58.132811   24174 mustload.go:65] Loading cluster: ha-461283
	I0722 10:47:58.133021   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:47:58.133298   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:47:58.133321   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:47:58.147456   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
	I0722 10:47:58.147842   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:47:58.148301   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:47:58.148323   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:47:58.148580   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:47:58.148755   24174 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:47:58.150177   24174 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:47:58.150449   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:47:58.150473   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:47:58.164905   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36539
	I0722 10:47:58.165296   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:47:58.165714   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:47:58.165733   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:47:58.166057   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:47:58.166245   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:47:58.166410   24174 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283 for IP: 192.168.39.207
	I0722 10:47:58.166422   24174 certs.go:194] generating shared ca certs ...
	I0722 10:47:58.166437   24174 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:58.166581   24174 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 10:47:58.166637   24174 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 10:47:58.166650   24174 certs.go:256] generating profile certs ...
	I0722 10:47:58.166742   24174 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key
	I0722 10:47:58.166772   24174 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.40161ecc
	I0722 10:47:58.166791   24174 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.40161ecc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.43 192.168.39.207 192.168.39.254]
	I0722 10:47:58.429254   24174 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.40161ecc ...
	I0722 10:47:58.429281   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.40161ecc: {Name:mk8a97d59811d83ad3be1c8b591fda17bff6b927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:58.429437   24174 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.40161ecc ...
	I0722 10:47:58.429449   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.40161ecc: {Name:mk595f26bd56e36f899c39440569455e9ebee967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:47:58.429522   24174 certs.go:381] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.40161ecc -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt
	I0722 10:47:58.429645   24174 certs.go:385] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.40161ecc -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key
	I0722 10:47:58.429766   24174 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key
	I0722 10:47:58.429781   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 10:47:58.429792   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 10:47:58.429805   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 10:47:58.429817   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 10:47:58.429829   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 10:47:58.429841   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 10:47:58.429852   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 10:47:58.429862   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 10:47:58.429916   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 10:47:58.429942   24174 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 10:47:58.429951   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 10:47:58.429972   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 10:47:58.429992   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 10:47:58.430011   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 10:47:58.430045   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 10:47:58.430069   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /usr/share/ca-certificates/130982.pem
	I0722 10:47:58.430082   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:47:58.430095   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem -> /usr/share/ca-certificates/13098.pem
	I0722 10:47:58.430123   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:47:58.432873   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:47:58.433194   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:47:58.433234   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:47:58.433383   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:47:58.433570   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:47:58.433710   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:47:58.433814   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:47:58.504804   24174 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0722 10:47:58.511555   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0722 10:47:58.522897   24174 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0722 10:47:58.526822   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0722 10:47:58.536795   24174 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0722 10:47:58.541569   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0722 10:47:58.551799   24174 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0722 10:47:58.555654   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0722 10:47:58.566538   24174 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0722 10:47:58.570378   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0722 10:47:58.580400   24174 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0722 10:47:58.584220   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0722 10:47:58.594107   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 10:47:58.620357   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 10:47:58.643801   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 10:47:58.665955   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 10:47:58.689217   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0722 10:47:58.713811   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 10:47:58.737216   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 10:47:58.760447   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 10:47:58.784915   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 10:47:58.808121   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 10:47:58.830169   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 10:47:58.852391   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0722 10:47:58.868323   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0722 10:47:58.884320   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0722 10:47:58.899981   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0722 10:47:58.915490   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0722 10:47:58.931428   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0722 10:47:58.946940   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0722 10:47:58.962309   24174 ssh_runner.go:195] Run: openssl version
	I0722 10:47:58.968094   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 10:47:58.978946   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 10:47:58.983292   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 10:47:58.983337   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 10:47:58.989083   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 10:47:59.000666   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 10:47:59.011239   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:47:59.015698   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:47:59.015755   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:47:59.021600   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 10:47:59.032920   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 10:47:59.045008   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 10:47:59.049309   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 10:47:59.049358   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 10:47:59.054801   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
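
	Note on the openssl/ln sequence above: each CA certificate copied to /usr/share/ca-certificates is hashed with openssl x509 -hash and symlinked into /etc/ssl/certs as <hash>.0 so OpenSSL-style trust lookups can find it. A minimal Go sketch of that step follows, assuming the certificate names from this run; the helper linkCACert is hypothetical, not minikube's code.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of a CA certificate and
	// symlinks it into certsDir as "<hash>.0", which is what the ssh_runner
	// commands in the log above do on the node. Illustrative sketch only.
	func linkCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // already linked
		}
		return os.Symlink(certPath, link)
	}

	func main() {
		// Certificate names taken from the log above; run as root on the node.
		for _, cert := range []string{"130982.pem", "minikubeCA.pem", "13098.pem"} {
			if err := linkCACert(filepath.Join("/usr/share/ca-certificates", cert), "/etc/ssl/certs"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
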
	I0722 10:47:59.065399   24174 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 10:47:59.069327   24174 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 10:47:59.069381   24174 kubeadm.go:934] updating node {m02 192.168.39.207 8443 v1.30.3 crio true true} ...
	I0722 10:47:59.069465   24174 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-461283-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 10:47:59.069491   24174 kube-vip.go:115] generating kube-vip config ...
	I0722 10:47:59.069525   24174 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 10:47:59.086122   24174 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 10:47:59.086186   24174 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
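
	Note on the manifest above: it is a static pod that runs kube-vip on the joining control-plane node, advertising the VIP 192.168.39.254 over ARP, electing a leader via the plndr-cp-lock lease, and load-balancing API traffic on port 8443. A small hedged Go check that the written manifest carries the expected VIP follows; checkKubeVIPManifest is a hypothetical helper and the path and VIP are taken from this run.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// checkKubeVIPManifest confirms the static-pod manifest written in the log
	// above references the expected control-plane VIP. Illustrative only.
	func checkKubeVIPManifest(path, vip string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if !strings.Contains(string(data), "value: "+vip) {
			return fmt.Errorf("%s does not reference VIP %s", path, vip)
		}
		return nil
	}

	func main() {
		if err := checkKubeVIPManifest("/etc/kubernetes/manifests/kube-vip.yaml", "192.168.39.254"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
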
	I0722 10:47:59.086228   24174 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 10:47:59.095586   24174 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0722 10:47:59.095645   24174 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0722 10:47:59.104618   24174 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0722 10:47:59.104642   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 10:47:59.104708   24174 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 10:47:59.104733   24174 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0722 10:47:59.104767   24174 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0722 10:47:59.108710   24174 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0722 10:47:59.108735   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0722 10:47:59.745705   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 10:47:59.745789   24174 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 10:47:59.751699   24174 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0722 10:47:59.751726   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0722 10:48:00.580026   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:48:00.596841   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 10:48:00.596944   24174 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 10:48:00.601828   24174 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0722 10:48:00.601861   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
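
	Note on the binary transfer above: kubectl, kubeadm and kubelet are downloaded from dl.k8s.io pinned to a checksum=file:...sha256 digest and then copied into /var/lib/minikube/binaries/v1.30.3 on the node. A hedged Go sketch of the verification half of that follows; verifySHA256 is a hypothetical helper, not minikube's download code.

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"os"
		"strings"
	)

	// verifySHA256 hashes a downloaded binary and compares it against the
	// published hex digest, mirroring the checksum-pinned downloads in the log
	// above. Sketch only.
	func verifySHA256(binPath, wantHex string) error {
		f, err := os.Open(binPath)
		if err != nil {
			return err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != strings.TrimSpace(wantHex) {
			return fmt.Errorf("checksum mismatch for %s: got %s want %s", binPath, got, wantHex)
		}
		return nil
	}

	func main() {
		// The expected digest would come from the corresponding .sha256 file;
		// both arguments are placeholders here.
		if err := verifySHA256(os.Args[1], os.Args[2]); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
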
	I0722 10:48:01.011204   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0722 10:48:01.020993   24174 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0722 10:48:01.037349   24174 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 10:48:01.053880   24174 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0722 10:48:01.069804   24174 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0722 10:48:01.073484   24174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 10:48:01.085558   24174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:48:01.205840   24174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 10:48:01.222485   24174 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:48:01.222954   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:48:01.222989   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:48:01.238394   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37465
	I0722 10:48:01.238873   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:48:01.239358   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:48:01.239385   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:48:01.239718   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:48:01.239938   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:48:01.240150   24174 start.go:317] joinCluster: &{Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:48:01.240274   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0722 10:48:01.240300   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:48:01.243159   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:48:01.243499   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:48:01.243525   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:48:01.243693   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:48:01.243866   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:48:01.244131   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:48:01.244331   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:48:01.411425   24174 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:48:01.411471   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ypay92.mav1gf1d3e8n4m1h --discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-461283-m02 --control-plane --apiserver-advertise-address=192.168.39.207 --apiserver-bind-port=8443"
	I0722 10:48:24.691472   24174 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ypay92.mav1gf1d3e8n4m1h --discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-461283-m02 --control-plane --apiserver-advertise-address=192.168.39.207 --apiserver-bind-port=8443": (23.279975288s)
	I0722 10:48:24.691512   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0722 10:48:25.300884   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-461283-m02 minikube.k8s.io/updated_at=2024_07_22T10_48_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=ha-461283 minikube.k8s.io/primary=false
	I0722 10:48:25.436971   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-461283-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0722 10:48:25.550787   24174 start.go:319] duration metric: took 24.310634091s to joinCluster
	I0722 10:48:25.550873   24174 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:48:25.551125   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:48:25.552212   24174 out.go:177] * Verifying Kubernetes components...
	I0722 10:48:25.553610   24174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:48:25.799483   24174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 10:48:25.843019   24174 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:48:25.843284   24174 kapi.go:59] client config for ha-461283: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.crt", KeyFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key", CAFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0722 10:48:25.843363   24174 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.43:8443
	I0722 10:48:25.843574   24174 node_ready.go:35] waiting up to 6m0s for node "ha-461283-m02" to be "Ready" ...
	I0722 10:48:25.843644   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:25.843652   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:25.843659   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:25.843662   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:25.863242   24174 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0722 10:48:26.344774   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:26.344800   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:26.344811   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:26.344817   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:26.353893   24174 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 10:48:26.843989   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:26.844014   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:26.844023   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:26.844028   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:26.852768   24174 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 10:48:27.344744   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:27.344763   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:27.344770   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:27.344775   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:27.350729   24174 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 10:48:27.844036   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:27.844059   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:27.844068   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:27.844073   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:27.847268   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:27.848029   24174 node_ready.go:53] node "ha-461283-m02" has status "Ready":"False"
	I0722 10:48:28.343700   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:28.343720   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:28.343730   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:28.343734   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:28.346747   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:28.844668   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:28.844693   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:28.844703   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:28.844709   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:28.847359   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:29.344402   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:29.344422   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:29.344429   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:29.344434   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:29.347445   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:29.844245   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:29.844267   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:29.844279   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:29.844286   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:29.846563   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:30.343961   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:30.343985   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:30.343995   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:30.344002   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:30.346955   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:30.347453   24174 node_ready.go:53] node "ha-461283-m02" has status "Ready":"False"
	I0722 10:48:30.843716   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:30.843734   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:30.843741   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:30.843744   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:30.846470   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:31.344014   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:31.344036   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:31.344047   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:31.344051   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:31.347040   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:31.844063   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:31.844083   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:31.844091   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:31.844095   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:31.846983   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:32.343831   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:32.343855   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:32.343862   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:32.343866   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:32.347186   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:32.347771   24174 node_ready.go:53] node "ha-461283-m02" has status "Ready":"False"
	I0722 10:48:32.844046   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:32.844068   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:32.844076   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:32.844081   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:32.848142   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:48:33.344485   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:33.344516   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:33.344523   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:33.344527   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:33.348125   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:33.844085   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:33.844111   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:33.844123   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:33.844130   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:33.846798   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:34.343783   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:34.343805   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:34.343816   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:34.343823   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:34.346974   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:34.347904   24174 node_ready.go:53] node "ha-461283-m02" has status "Ready":"False"
	I0722 10:48:34.844249   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:34.844270   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:34.844278   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:34.844281   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:34.847398   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:35.344451   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:35.344473   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:35.344481   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:35.344484   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:35.347665   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:35.844077   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:35.844102   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:35.844114   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:35.844118   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:35.847177   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:36.344643   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:36.344665   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:36.344676   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:36.344681   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:36.348405   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:36.348982   24174 node_ready.go:53] node "ha-461283-m02" has status "Ready":"False"
	I0722 10:48:36.844453   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:36.844474   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:36.844482   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:36.844486   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:36.848497   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:37.344572   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:37.344599   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:37.344610   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:37.344616   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:37.348269   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:37.844700   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:37.844723   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:37.844734   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:37.844740   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:37.847962   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:38.343890   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:38.343910   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:38.343918   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:38.343923   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:38.347069   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:38.844482   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:38.844507   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:38.844519   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:38.844527   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:38.847362   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:38.847891   24174 node_ready.go:53] node "ha-461283-m02" has status "Ready":"False"
	I0722 10:48:39.344180   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:39.344205   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.344213   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.344218   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.347692   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:39.844660   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:39.844683   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.844692   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.844698   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.847829   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:39.848459   24174 node_ready.go:49] node "ha-461283-m02" has status "Ready":"True"
	I0722 10:48:39.848477   24174 node_ready.go:38] duration metric: took 14.004887367s for node "ha-461283-m02" to be "Ready" ...
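
	Note on the wait above: node_ready.go simply polls GET /api/v1/nodes/ha-461283-m02 roughly every 500ms until the node's Ready condition is True (about 14s in this run). A hedged client-go sketch of an equivalent wait follows; waitNodeReady is a hypothetical helper and the kubeconfig handling is an assumption, not minikube's code.

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the API server until the named node reports
	// Ready=True, roughly what node_ready.go does in the log above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "ha-461283-m02"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
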
	I0722 10:48:39.848485   24174 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 10:48:39.848534   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:48:39.848543   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.848550   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.848553   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.852902   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:48:39.859233   24174 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qrfdd" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:39.859290   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-qrfdd
	I0722 10:48:39.859298   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.859306   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.859310   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.861613   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:39.862209   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:39.862223   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.862230   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.862234   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.864043   24174 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 10:48:39.864695   24174 pod_ready.go:92] pod "coredns-7db6d8ff4d-qrfdd" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:39.864709   24174 pod_ready.go:81] duration metric: took 5.457806ms for pod "coredns-7db6d8ff4d-qrfdd" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:39.864716   24174 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zb547" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:39.864754   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zb547
	I0722 10:48:39.864761   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.864767   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.864770   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.867561   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:39.868547   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:39.868560   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.868567   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.868571   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.870916   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:39.871417   24174 pod_ready.go:92] pod "coredns-7db6d8ff4d-zb547" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:39.871431   24174 pod_ready.go:81] duration metric: took 6.70921ms for pod "coredns-7db6d8ff4d-zb547" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:39.871438   24174 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:39.871489   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-ha-461283
	I0722 10:48:39.871500   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.871510   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.871515   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.873780   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:39.874369   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:39.874384   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.874393   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.874399   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.876544   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:39.877280   24174 pod_ready.go:92] pod "etcd-ha-461283" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:39.877293   24174 pod_ready.go:81] duration metric: took 5.849097ms for pod "etcd-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:39.877299   24174 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:39.877345   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-ha-461283-m02
	I0722 10:48:39.877354   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.877361   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.877364   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.879962   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:39.880946   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:39.880959   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:39.880968   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:39.880974   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:39.887680   24174 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 10:48:40.377819   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-ha-461283-m02
	I0722 10:48:40.377850   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:40.377858   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:40.377865   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:40.381180   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:40.381693   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:40.381706   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:40.381715   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:40.381719   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:40.384712   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:40.878063   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-ha-461283-m02
	I0722 10:48:40.878084   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:40.878092   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:40.878100   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:40.881150   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:40.881934   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:40.881948   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:40.881956   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:40.881959   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:40.884560   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:40.885088   24174 pod_ready.go:92] pod "etcd-ha-461283-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:40.885107   24174 pod_ready.go:81] duration metric: took 1.007801941s for pod "etcd-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:40.885127   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:40.885171   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283
	I0722 10:48:40.885178   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:40.885186   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:40.885189   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:40.887601   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:41.045448   24174 request.go:629] Waited for 157.314344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:41.045509   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:41.045517   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:41.045527   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:41.045546   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:41.048170   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:41.048948   24174 pod_ready.go:92] pod "kube-apiserver-ha-461283" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:41.048962   24174 pod_ready.go:81] duration metric: took 163.829366ms for pod "kube-apiserver-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:41.048973   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:41.245382   24174 request.go:629] Waited for 196.340468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m02
	I0722 10:48:41.245436   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m02
	I0722 10:48:41.245443   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:41.245470   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:41.245476   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:41.248579   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:41.444648   24174 request.go:629] Waited for 195.12048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:41.444729   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:41.444736   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:41.444746   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:41.444753   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:41.448003   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:41.448735   24174 pod_ready.go:92] pod "kube-apiserver-ha-461283-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:41.448753   24174 pod_ready.go:81] duration metric: took 399.770264ms for pod "kube-apiserver-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:41.448762   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:41.644986   24174 request.go:629] Waited for 196.107358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283
	I0722 10:48:41.645039   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283
	I0722 10:48:41.645046   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:41.645056   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:41.645064   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:41.648469   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:41.845599   24174 request.go:629] Waited for 196.436498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:41.845831   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:41.845844   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:41.845856   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:41.845868   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:41.850996   24174 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 10:48:41.852097   24174 pod_ready.go:92] pod "kube-controller-manager-ha-461283" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:41.852116   24174 pod_ready.go:81] duration metric: took 403.346955ms for pod "kube-controller-manager-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:41.852129   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:42.045145   24174 request.go:629] Waited for 192.95325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283-m02
	I0722 10:48:42.045239   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283-m02
	I0722 10:48:42.045251   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:42.045258   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:42.045264   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:42.047596   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:42.245453   24174 request.go:629] Waited for 197.350124ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:42.245528   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:42.245539   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:42.245551   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:42.245559   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:42.248372   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:42.248862   24174 pod_ready.go:92] pod "kube-controller-manager-ha-461283-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:42.248880   24174 pod_ready.go:81] duration metric: took 396.744128ms for pod "kube-controller-manager-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:42.248890   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-28zxf" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:42.445033   24174 request.go:629] Waited for 196.085737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-28zxf
	I0722 10:48:42.445106   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-28zxf
	I0722 10:48:42.445116   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:42.445123   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:42.445128   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:42.448498   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:42.645624   24174 request.go:629] Waited for 196.365494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:42.645673   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:42.645678   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:42.645685   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:42.645690   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:42.648527   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:42.649134   24174 pod_ready.go:92] pod "kube-proxy-28zxf" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:42.649151   24174 pod_ready.go:81] duration metric: took 400.253951ms for pod "kube-proxy-28zxf" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:42.649160   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xkbsx" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:42.845279   24174 request.go:629] Waited for 196.062558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkbsx
	I0722 10:48:42.845384   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkbsx
	I0722 10:48:42.845395   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:42.845406   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:42.845416   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:42.849246   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:43.044710   24174 request.go:629] Waited for 194.2934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:43.044777   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:43.044783   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:43.044790   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:43.044797   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:43.047731   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:43.048289   24174 pod_ready.go:92] pod "kube-proxy-xkbsx" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:43.048307   24174 pod_ready.go:81] duration metric: took 399.140003ms for pod "kube-proxy-xkbsx" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:43.048318   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:43.245697   24174 request.go:629] Waited for 197.316846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283
	I0722 10:48:43.245778   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283
	I0722 10:48:43.245788   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:43.245800   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:43.245811   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:43.249114   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:43.445283   24174 request.go:629] Waited for 195.497705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:43.445351   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:48:43.445359   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:43.445369   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:43.445374   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:43.448694   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:43.449520   24174 pod_ready.go:92] pod "kube-scheduler-ha-461283" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:43.449537   24174 pod_ready.go:81] duration metric: took 401.211193ms for pod "kube-scheduler-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:43.449546   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:43.645724   24174 request.go:629] Waited for 196.109694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283-m02
	I0722 10:48:43.645794   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283-m02
	I0722 10:48:43.645802   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:43.645813   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:43.645822   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:43.649328   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:43.845450   24174 request.go:629] Waited for 195.380755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:43.845521   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:48:43.845528   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:43.845537   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:43.845543   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:43.848353   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:43.848987   24174 pod_ready.go:92] pod "kube-scheduler-ha-461283-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 10:48:43.849004   24174 pod_ready.go:81] duration metric: took 399.45262ms for pod "kube-scheduler-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:48:43.849014   24174 pod_ready.go:38] duration metric: took 4.000520366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 10:48:43.849029   24174 api_server.go:52] waiting for apiserver process to appear ...
	I0722 10:48:43.849081   24174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:48:43.864186   24174 api_server.go:72] duration metric: took 18.313277926s to wait for apiserver process to appear ...
	I0722 10:48:43.864203   24174 api_server.go:88] waiting for apiserver healthz status ...
	I0722 10:48:43.864217   24174 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0722 10:48:43.868250   24174 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I0722 10:48:43.868313   24174 round_trippers.go:463] GET https://192.168.39.43:8443/version
	I0722 10:48:43.868324   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:43.868334   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:43.868345   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:43.869157   24174 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0722 10:48:43.869256   24174 api_server.go:141] control plane version: v1.30.3
	I0722 10:48:43.869274   24174 api_server.go:131] duration metric: took 5.065194ms to wait for apiserver health ...
	I0722 10:48:43.869284   24174 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 10:48:44.045535   24174 request.go:629] Waited for 176.181322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:48:44.045588   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:48:44.045593   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:44.045601   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:44.045606   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:44.053011   24174 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 10:48:44.059851   24174 system_pods.go:59] 17 kube-system pods found
	I0722 10:48:44.059876   24174 system_pods.go:61] "coredns-7db6d8ff4d-qrfdd" [f1c9698a-e97d-4b8a-ab71-f19003b5dcfd] Running
	I0722 10:48:44.059881   24174 system_pods.go:61] "coredns-7db6d8ff4d-zb547" [54886641-9710-4355-86ff-016ad48b5cd5] Running
	I0722 10:48:44.059885   24174 system_pods.go:61] "etcd-ha-461283" [842e06f5-5c51-4cd9-b6ab-b3a8cbc9e23b] Running
	I0722 10:48:44.059888   24174 system_pods.go:61] "etcd-ha-461283-m02" [832101e1-09b9-4b1c-a39b-77c46725a280] Running
	I0722 10:48:44.059892   24174 system_pods.go:61] "kindnet-hmrqh" [abe55aff-7926-481f-90cd-3cc209d79f63] Running
	I0722 10:48:44.059895   24174 system_pods.go:61] "kindnet-qsphb" [6b302f3f-51ae-4492-8ac3-470e7739ad08] Running
	I0722 10:48:44.059898   24174 system_pods.go:61] "kube-apiserver-ha-461283" [ca55ae7f-0148-4802-b9cb-424453f13992] Running
	I0722 10:48:44.059901   24174 system_pods.go:61] "kube-apiserver-ha-461283-m02" [d19287ef-f418-4ec5-bb43-e42dd94562ea] Running
	I0722 10:48:44.059904   24174 system_pods.go:61] "kube-controller-manager-ha-461283" [3adf0e38-7eb7-4945-9059-5371718a8d92] Running
	I0722 10:48:44.059907   24174 system_pods.go:61] "kube-controller-manager-ha-461283-m02" [d1cebc09-9543-4d78-a1b9-785e4c489814] Running
	I0722 10:48:44.059910   24174 system_pods.go:61] "kube-proxy-28zxf" [5894062f-0d05-45f4-88eb-da134f234e2d] Running
	I0722 10:48:44.059913   24174 system_pods.go:61] "kube-proxy-xkbsx" [9d137555-9952-418f-bbfb-2159a48bbfcc] Running
	I0722 10:48:44.059916   24174 system_pods.go:61] "kube-scheduler-ha-461283" [3c18099b-16d8-4214-92c8-b583323bed9b] Running
	I0722 10:48:44.059919   24174 system_pods.go:61] "kube-scheduler-ha-461283-m02" [bdffe858-ca6b-4f8c-951a-e08115dff406] Running
	I0722 10:48:44.059921   24174 system_pods.go:61] "kube-vip-ha-461283" [244dde01-94fe-46c1-82f2-92ca2624750e] Running
	I0722 10:48:44.059926   24174 system_pods.go:61] "kube-vip-ha-461283-m02" [a74a9071-1b29-4c1a-abc4-b57a7499e3d8] Running
	I0722 10:48:44.059928   24174 system_pods.go:61] "storage-provisioner" [a336a57b-330a-4251-8e33-2b277593a565] Running
	I0722 10:48:44.059933   24174 system_pods.go:74] duration metric: took 190.641674ms to wait for pod list to return data ...
	I0722 10:48:44.059943   24174 default_sa.go:34] waiting for default service account to be created ...
	I0722 10:48:44.245377   24174 request.go:629] Waited for 185.370785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/default/serviceaccounts
	I0722 10:48:44.245427   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/default/serviceaccounts
	I0722 10:48:44.245432   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:44.245438   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:44.245442   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:44.248417   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:48:44.248661   24174 default_sa.go:45] found service account: "default"
	I0722 10:48:44.248679   24174 default_sa.go:55] duration metric: took 188.730585ms for default service account to be created ...
	I0722 10:48:44.248688   24174 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 10:48:44.444934   24174 request.go:629] Waited for 196.187287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:48:44.445012   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:48:44.445017   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:44.445025   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:44.445032   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:44.450361   24174 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 10:48:44.457320   24174 system_pods.go:86] 17 kube-system pods found
	I0722 10:48:44.457343   24174 system_pods.go:89] "coredns-7db6d8ff4d-qrfdd" [f1c9698a-e97d-4b8a-ab71-f19003b5dcfd] Running
	I0722 10:48:44.457348   24174 system_pods.go:89] "coredns-7db6d8ff4d-zb547" [54886641-9710-4355-86ff-016ad48b5cd5] Running
	I0722 10:48:44.457353   24174 system_pods.go:89] "etcd-ha-461283" [842e06f5-5c51-4cd9-b6ab-b3a8cbc9e23b] Running
	I0722 10:48:44.457357   24174 system_pods.go:89] "etcd-ha-461283-m02" [832101e1-09b9-4b1c-a39b-77c46725a280] Running
	I0722 10:48:44.457361   24174 system_pods.go:89] "kindnet-hmrqh" [abe55aff-7926-481f-90cd-3cc209d79f63] Running
	I0722 10:48:44.457364   24174 system_pods.go:89] "kindnet-qsphb" [6b302f3f-51ae-4492-8ac3-470e7739ad08] Running
	I0722 10:48:44.457369   24174 system_pods.go:89] "kube-apiserver-ha-461283" [ca55ae7f-0148-4802-b9cb-424453f13992] Running
	I0722 10:48:44.457377   24174 system_pods.go:89] "kube-apiserver-ha-461283-m02" [d19287ef-f418-4ec5-bb43-e42dd94562ea] Running
	I0722 10:48:44.457385   24174 system_pods.go:89] "kube-controller-manager-ha-461283" [3adf0e38-7eb7-4945-9059-5371718a8d92] Running
	I0722 10:48:44.457394   24174 system_pods.go:89] "kube-controller-manager-ha-461283-m02" [d1cebc09-9543-4d78-a1b9-785e4c489814] Running
	I0722 10:48:44.457401   24174 system_pods.go:89] "kube-proxy-28zxf" [5894062f-0d05-45f4-88eb-da134f234e2d] Running
	I0722 10:48:44.457410   24174 system_pods.go:89] "kube-proxy-xkbsx" [9d137555-9952-418f-bbfb-2159a48bbfcc] Running
	I0722 10:48:44.457414   24174 system_pods.go:89] "kube-scheduler-ha-461283" [3c18099b-16d8-4214-92c8-b583323bed9b] Running
	I0722 10:48:44.457418   24174 system_pods.go:89] "kube-scheduler-ha-461283-m02" [bdffe858-ca6b-4f8c-951a-e08115dff406] Running
	I0722 10:48:44.457421   24174 system_pods.go:89] "kube-vip-ha-461283" [244dde01-94fe-46c1-82f2-92ca2624750e] Running
	I0722 10:48:44.457428   24174 system_pods.go:89] "kube-vip-ha-461283-m02" [a74a9071-1b29-4c1a-abc4-b57a7499e3d8] Running
	I0722 10:48:44.457431   24174 system_pods.go:89] "storage-provisioner" [a336a57b-330a-4251-8e33-2b277593a565] Running
	I0722 10:48:44.457437   24174 system_pods.go:126] duration metric: took 208.742477ms to wait for k8s-apps to be running ...
	I0722 10:48:44.457446   24174 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 10:48:44.457492   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:48:44.472821   24174 system_svc.go:56] duration metric: took 15.367443ms WaitForService to wait for kubelet
	I0722 10:48:44.472846   24174 kubeadm.go:582] duration metric: took 18.921938085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 10:48:44.472866   24174 node_conditions.go:102] verifying NodePressure condition ...
	I0722 10:48:44.645244   24174 request.go:629] Waited for 172.313585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes
	I0722 10:48:44.645304   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes
	I0722 10:48:44.645324   24174 round_trippers.go:469] Request Headers:
	I0722 10:48:44.645335   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:48:44.645340   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:48:44.648848   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:48:44.649597   24174 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 10:48:44.649616   24174 node_conditions.go:123] node cpu capacity is 2
	I0722 10:48:44.649629   24174 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 10:48:44.649635   24174 node_conditions.go:123] node cpu capacity is 2
	I0722 10:48:44.649640   24174 node_conditions.go:105] duration metric: took 176.768458ms to run NodePressure ...
	I0722 10:48:44.649654   24174 start.go:241] waiting for startup goroutines ...
	I0722 10:48:44.649689   24174 start.go:255] writing updated cluster config ...
	I0722 10:48:44.652165   24174 out.go:177] 
	I0722 10:48:44.653480   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:48:44.653578   24174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:48:44.655144   24174 out.go:177] * Starting "ha-461283-m03" control-plane node in "ha-461283" cluster
	I0722 10:48:44.656272   24174 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:48:44.656289   24174 cache.go:56] Caching tarball of preloaded images
	I0722 10:48:44.656371   24174 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 10:48:44.656395   24174 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 10:48:44.656479   24174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:48:44.656690   24174 start.go:360] acquireMachinesLock for ha-461283-m03: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 10:48:44.656729   24174 start.go:364] duration metric: took 22.177µs to acquireMachinesLock for "ha-461283-m03"
	I0722 10:48:44.656744   24174 start.go:93] Provisioning new machine with config: &{Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:48:44.656824   24174 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0722 10:48:44.658312   24174 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 10:48:44.658378   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:48:44.658409   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:48:44.672972   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I0722 10:48:44.673379   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:48:44.673764   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:48:44.673784   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:48:44.674099   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:48:44.674280   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetMachineName
	I0722 10:48:44.674434   24174 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:48:44.674583   24174 start.go:159] libmachine.API.Create for "ha-461283" (driver="kvm2")
	I0722 10:48:44.674610   24174 client.go:168] LocalClient.Create starting
	I0722 10:48:44.674640   24174 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem
	I0722 10:48:44.674673   24174 main.go:141] libmachine: Decoding PEM data...
	I0722 10:48:44.674690   24174 main.go:141] libmachine: Parsing certificate...
	I0722 10:48:44.674753   24174 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem
	I0722 10:48:44.674778   24174 main.go:141] libmachine: Decoding PEM data...
	I0722 10:48:44.674791   24174 main.go:141] libmachine: Parsing certificate...
	I0722 10:48:44.674816   24174 main.go:141] libmachine: Running pre-create checks...
	I0722 10:48:44.674827   24174 main.go:141] libmachine: (ha-461283-m03) Calling .PreCreateCheck
	I0722 10:48:44.674986   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetConfigRaw
	I0722 10:48:44.675313   24174 main.go:141] libmachine: Creating machine...
	I0722 10:48:44.675329   24174 main.go:141] libmachine: (ha-461283-m03) Calling .Create
	I0722 10:48:44.675457   24174 main.go:141] libmachine: (ha-461283-m03) Creating KVM machine...
	I0722 10:48:44.676646   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found existing default KVM network
	I0722 10:48:44.676771   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found existing private KVM network mk-ha-461283
	I0722 10:48:44.676899   24174 main.go:141] libmachine: (ha-461283-m03) Setting up store path in /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03 ...
	I0722 10:48:44.676920   24174 main.go:141] libmachine: (ha-461283-m03) Building disk image from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 10:48:44.676981   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:44.676896   24968 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:48:44.677054   24174 main.go:141] libmachine: (ha-461283-m03) Downloading /home/jenkins/minikube-integration/19313-5960/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 10:48:44.916618   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:44.916520   24968 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa...
	I0722 10:48:45.260636   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:45.260508   24968 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/ha-461283-m03.rawdisk...
	I0722 10:48:45.260676   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Writing magic tar header
	I0722 10:48:45.260692   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Writing SSH key tar header
	I0722 10:48:45.260705   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:45.260651   24968 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03 ...
	I0722 10:48:45.260791   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03
	I0722 10:48:45.260830   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines
	I0722 10:48:45.260856   24174 main.go:141] libmachine: (ha-461283-m03) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03 (perms=drwx------)
	I0722 10:48:45.260868   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:48:45.260885   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960
	I0722 10:48:45.260896   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0722 10:48:45.260909   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Checking permissions on dir: /home/jenkins
	I0722 10:48:45.260924   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Checking permissions on dir: /home
	I0722 10:48:45.260937   24174 main.go:141] libmachine: (ha-461283-m03) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines (perms=drwxr-xr-x)
	I0722 10:48:45.260949   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Skipping /home - not owner
	I0722 10:48:45.260966   24174 main.go:141] libmachine: (ha-461283-m03) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube (perms=drwxr-xr-x)
	I0722 10:48:45.260981   24174 main.go:141] libmachine: (ha-461283-m03) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960 (perms=drwxrwxr-x)
	I0722 10:48:45.260993   24174 main.go:141] libmachine: (ha-461283-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0722 10:48:45.261006   24174 main.go:141] libmachine: (ha-461283-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0722 10:48:45.261016   24174 main.go:141] libmachine: (ha-461283-m03) Creating domain...
	I0722 10:48:45.261834   24174 main.go:141] libmachine: (ha-461283-m03) define libvirt domain using xml: 
	I0722 10:48:45.261858   24174 main.go:141] libmachine: (ha-461283-m03) <domain type='kvm'>
	I0722 10:48:45.261870   24174 main.go:141] libmachine: (ha-461283-m03)   <name>ha-461283-m03</name>
	I0722 10:48:45.261879   24174 main.go:141] libmachine: (ha-461283-m03)   <memory unit='MiB'>2200</memory>
	I0722 10:48:45.261892   24174 main.go:141] libmachine: (ha-461283-m03)   <vcpu>2</vcpu>
	I0722 10:48:45.261902   24174 main.go:141] libmachine: (ha-461283-m03)   <features>
	I0722 10:48:45.261912   24174 main.go:141] libmachine: (ha-461283-m03)     <acpi/>
	I0722 10:48:45.261922   24174 main.go:141] libmachine: (ha-461283-m03)     <apic/>
	I0722 10:48:45.261936   24174 main.go:141] libmachine: (ha-461283-m03)     <pae/>
	I0722 10:48:45.261946   24174 main.go:141] libmachine: (ha-461283-m03)     
	I0722 10:48:45.261970   24174 main.go:141] libmachine: (ha-461283-m03)   </features>
	I0722 10:48:45.261991   24174 main.go:141] libmachine: (ha-461283-m03)   <cpu mode='host-passthrough'>
	I0722 10:48:45.261998   24174 main.go:141] libmachine: (ha-461283-m03)   
	I0722 10:48:45.262007   24174 main.go:141] libmachine: (ha-461283-m03)   </cpu>
	I0722 10:48:45.262016   24174 main.go:141] libmachine: (ha-461283-m03)   <os>
	I0722 10:48:45.262026   24174 main.go:141] libmachine: (ha-461283-m03)     <type>hvm</type>
	I0722 10:48:45.262034   24174 main.go:141] libmachine: (ha-461283-m03)     <boot dev='cdrom'/>
	I0722 10:48:45.262041   24174 main.go:141] libmachine: (ha-461283-m03)     <boot dev='hd'/>
	I0722 10:48:45.262047   24174 main.go:141] libmachine: (ha-461283-m03)     <bootmenu enable='no'/>
	I0722 10:48:45.262053   24174 main.go:141] libmachine: (ha-461283-m03)   </os>
	I0722 10:48:45.262059   24174 main.go:141] libmachine: (ha-461283-m03)   <devices>
	I0722 10:48:45.262066   24174 main.go:141] libmachine: (ha-461283-m03)     <disk type='file' device='cdrom'>
	I0722 10:48:45.262074   24174 main.go:141] libmachine: (ha-461283-m03)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/boot2docker.iso'/>
	I0722 10:48:45.262082   24174 main.go:141] libmachine: (ha-461283-m03)       <target dev='hdc' bus='scsi'/>
	I0722 10:48:45.262090   24174 main.go:141] libmachine: (ha-461283-m03)       <readonly/>
	I0722 10:48:45.262094   24174 main.go:141] libmachine: (ha-461283-m03)     </disk>
	I0722 10:48:45.262123   24174 main.go:141] libmachine: (ha-461283-m03)     <disk type='file' device='disk'>
	I0722 10:48:45.262158   24174 main.go:141] libmachine: (ha-461283-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0722 10:48:45.262178   24174 main.go:141] libmachine: (ha-461283-m03)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/ha-461283-m03.rawdisk'/>
	I0722 10:48:45.262190   24174 main.go:141] libmachine: (ha-461283-m03)       <target dev='hda' bus='virtio'/>
	I0722 10:48:45.262201   24174 main.go:141] libmachine: (ha-461283-m03)     </disk>
	I0722 10:48:45.262212   24174 main.go:141] libmachine: (ha-461283-m03)     <interface type='network'>
	I0722 10:48:45.262228   24174 main.go:141] libmachine: (ha-461283-m03)       <source network='mk-ha-461283'/>
	I0722 10:48:45.262240   24174 main.go:141] libmachine: (ha-461283-m03)       <model type='virtio'/>
	I0722 10:48:45.262251   24174 main.go:141] libmachine: (ha-461283-m03)     </interface>
	I0722 10:48:45.262263   24174 main.go:141] libmachine: (ha-461283-m03)     <interface type='network'>
	I0722 10:48:45.262272   24174 main.go:141] libmachine: (ha-461283-m03)       <source network='default'/>
	I0722 10:48:45.262284   24174 main.go:141] libmachine: (ha-461283-m03)       <model type='virtio'/>
	I0722 10:48:45.262294   24174 main.go:141] libmachine: (ha-461283-m03)     </interface>
	I0722 10:48:45.262303   24174 main.go:141] libmachine: (ha-461283-m03)     <serial type='pty'>
	I0722 10:48:45.262318   24174 main.go:141] libmachine: (ha-461283-m03)       <target port='0'/>
	I0722 10:48:45.262329   24174 main.go:141] libmachine: (ha-461283-m03)     </serial>
	I0722 10:48:45.262340   24174 main.go:141] libmachine: (ha-461283-m03)     <console type='pty'>
	I0722 10:48:45.262353   24174 main.go:141] libmachine: (ha-461283-m03)       <target type='serial' port='0'/>
	I0722 10:48:45.262362   24174 main.go:141] libmachine: (ha-461283-m03)     </console>
	I0722 10:48:45.262375   24174 main.go:141] libmachine: (ha-461283-m03)     <rng model='virtio'>
	I0722 10:48:45.262386   24174 main.go:141] libmachine: (ha-461283-m03)       <backend model='random'>/dev/random</backend>
	I0722 10:48:45.262396   24174 main.go:141] libmachine: (ha-461283-m03)     </rng>
	I0722 10:48:45.262408   24174 main.go:141] libmachine: (ha-461283-m03)     
	I0722 10:48:45.262429   24174 main.go:141] libmachine: (ha-461283-m03)     
	I0722 10:48:45.262449   24174 main.go:141] libmachine: (ha-461283-m03)   </devices>
	I0722 10:48:45.262461   24174 main.go:141] libmachine: (ha-461283-m03) </domain>
	I0722 10:48:45.262470   24174 main.go:141] libmachine: (ha-461283-m03) 
	I0722 10:48:45.268874   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:3c:b5:d2 in network default
	I0722 10:48:45.269584   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:45.269612   24174 main.go:141] libmachine: (ha-461283-m03) Ensuring networks are active...
	I0722 10:48:45.270240   24174 main.go:141] libmachine: (ha-461283-m03) Ensuring network default is active
	I0722 10:48:45.270543   24174 main.go:141] libmachine: (ha-461283-m03) Ensuring network mk-ha-461283 is active
	I0722 10:48:45.270958   24174 main.go:141] libmachine: (ha-461283-m03) Getting domain xml...
	I0722 10:48:45.271633   24174 main.go:141] libmachine: (ha-461283-m03) Creating domain...
	I0722 10:48:46.475752   24174 main.go:141] libmachine: (ha-461283-m03) Waiting to get IP...
	I0722 10:48:46.476626   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:46.477027   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:46.477056   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:46.477000   24968 retry.go:31] will retry after 275.121113ms: waiting for machine to come up
	I0722 10:48:46.753462   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:46.754036   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:46.754057   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:46.753902   24968 retry.go:31] will retry after 295.674602ms: waiting for machine to come up
	I0722 10:48:47.052238   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:47.052694   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:47.052724   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:47.052655   24968 retry.go:31] will retry after 451.913479ms: waiting for machine to come up
	I0722 10:48:47.506397   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:47.506876   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:47.506907   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:47.506809   24968 retry.go:31] will retry after 519.604109ms: waiting for machine to come up
	I0722 10:48:48.028482   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:48.028944   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:48.028974   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:48.028893   24968 retry.go:31] will retry after 476.957069ms: waiting for machine to come up
	I0722 10:48:48.507575   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:48.508072   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:48.508116   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:48.508042   24968 retry.go:31] will retry after 608.903487ms: waiting for machine to come up
	I0722 10:48:49.118665   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:49.119083   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:49.119108   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:49.119052   24968 retry.go:31] will retry after 889.181468ms: waiting for machine to come up
	I0722 10:48:50.009468   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:50.009937   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:50.009966   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:50.009893   24968 retry.go:31] will retry after 1.279479167s: waiting for machine to come up
	I0722 10:48:51.291228   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:51.291716   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:51.291745   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:51.291668   24968 retry.go:31] will retry after 1.661195322s: waiting for machine to come up
	I0722 10:48:52.955409   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:52.955765   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:52.955794   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:52.955713   24968 retry.go:31] will retry after 1.546832146s: waiting for machine to come up
	I0722 10:48:54.504366   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:54.504902   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:54.504944   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:54.504835   24968 retry.go:31] will retry after 2.353682552s: waiting for machine to come up
	I0722 10:48:56.861727   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:48:56.862178   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:48:56.862203   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:48:56.862133   24968 retry.go:31] will retry after 3.158413013s: waiting for machine to come up
	I0722 10:49:00.022502   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:00.023022   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:49:00.023045   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:49:00.022979   24968 retry.go:31] will retry after 3.932718421s: waiting for machine to come up
	I0722 10:49:03.957718   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:03.958092   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find current IP address of domain ha-461283-m03 in network mk-ha-461283
	I0722 10:49:03.958118   24174 main.go:141] libmachine: (ha-461283-m03) DBG | I0722 10:49:03.958056   24968 retry.go:31] will retry after 4.074630574s: waiting for machine to come up
	I0722 10:49:08.036477   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.037005   24174 main.go:141] libmachine: (ha-461283-m03) Found IP for machine: 192.168.39.127
	I0722 10:49:08.037024   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has current primary IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.037032   24174 main.go:141] libmachine: (ha-461283-m03) Reserving static IP address...
	I0722 10:49:08.037433   24174 main.go:141] libmachine: (ha-461283-m03) DBG | unable to find host DHCP lease matching {name: "ha-461283-m03", mac: "52:54:00:03:8f:df", ip: "192.168.39.127"} in network mk-ha-461283
	I0722 10:49:08.107902   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Getting to WaitForSSH function...
	I0722 10:49:08.107932   24174 main.go:141] libmachine: (ha-461283-m03) Reserved static IP address: 192.168.39.127
	I0722 10:49:08.107945   24174 main.go:141] libmachine: (ha-461283-m03) Waiting for SSH to be available...
	I0722 10:49:08.110233   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.110734   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:minikube Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.110759   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.110912   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Using SSH client type: external
	I0722 10:49:08.110932   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa (-rw-------)
	I0722 10:49:08.110974   24174 main.go:141] libmachine: (ha-461283-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 10:49:08.110988   24174 main.go:141] libmachine: (ha-461283-m03) DBG | About to run SSH command:
	I0722 10:49:08.111022   24174 main.go:141] libmachine: (ha-461283-m03) DBG | exit 0
	I0722 10:49:08.240542   24174 main.go:141] libmachine: (ha-461283-m03) DBG | SSH cmd err, output: <nil>: 
	I0722 10:49:08.240825   24174 main.go:141] libmachine: (ha-461283-m03) KVM machine creation complete!
	I0722 10:49:08.241178   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetConfigRaw
	I0722 10:49:08.241676   24174 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:49:08.241876   24174 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:49:08.242060   24174 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0722 10:49:08.242075   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetState
	I0722 10:49:08.243399   24174 main.go:141] libmachine: Detecting operating system of created instance...
	I0722 10:49:08.243416   24174 main.go:141] libmachine: Waiting for SSH to be available...
	I0722 10:49:08.243423   24174 main.go:141] libmachine: Getting to WaitForSSH function...
	I0722 10:49:08.243432   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:08.245715   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.246100   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.246127   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.246283   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:08.246461   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.246581   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.246695   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:08.246820   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:49:08.247047   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0722 10:49:08.247061   24174 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0722 10:49:08.359512   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 10:49:08.359531   24174 main.go:141] libmachine: Detecting the provisioner...
	I0722 10:49:08.359538   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:08.362273   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.362612   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.362634   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.362798   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:08.362982   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.363160   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.363287   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:08.363455   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:49:08.363640   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0722 10:49:08.363659   24174 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0722 10:49:08.477195   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0722 10:49:08.477266   24174 main.go:141] libmachine: found compatible host: buildroot
	I0722 10:49:08.477277   24174 main.go:141] libmachine: Provisioning with buildroot...
	I0722 10:49:08.477291   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetMachineName
	I0722 10:49:08.477516   24174 buildroot.go:166] provisioning hostname "ha-461283-m03"
	I0722 10:49:08.477545   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetMachineName
	I0722 10:49:08.477754   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:08.480321   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.480780   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.480803   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.481023   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:08.481177   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.481306   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.481418   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:08.481557   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:49:08.481748   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0722 10:49:08.481762   24174 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-461283-m03 && echo "ha-461283-m03" | sudo tee /etc/hostname
	I0722 10:49:08.606844   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-461283-m03
	
	I0722 10:49:08.606886   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:08.609767   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.610210   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.610239   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.610387   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:08.610594   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.610752   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.610913   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:08.611058   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:49:08.611216   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0722 10:49:08.611233   24174 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-461283-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-461283-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-461283-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 10:49:08.733722   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 10:49:08.733751   24174 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 10:49:08.733790   24174 buildroot.go:174] setting up certificates
	I0722 10:49:08.733807   24174 provision.go:84] configureAuth start
	I0722 10:49:08.733826   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetMachineName
	I0722 10:49:08.734125   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:49:08.736480   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.736866   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.736892   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.737028   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:08.739129   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.739445   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.739470   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.739608   24174 provision.go:143] copyHostCerts
	I0722 10:49:08.739638   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 10:49:08.739666   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 10:49:08.739676   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 10:49:08.739738   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 10:49:08.739800   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 10:49:08.739817   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 10:49:08.739825   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 10:49:08.739852   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 10:49:08.739901   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 10:49:08.739917   24174 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 10:49:08.739923   24174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 10:49:08.739943   24174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 10:49:08.739988   24174 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.ha-461283-m03 san=[127.0.0.1 192.168.39.127 ha-461283-m03 localhost minikube]
	I0722 10:49:08.820848   24174 provision.go:177] copyRemoteCerts
	I0722 10:49:08.820914   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 10:49:08.820941   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:08.823287   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.823642   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.823667   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.823889   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:08.824029   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.824188   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:08.824355   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:49:08.910528   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 10:49:08.910598   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 10:49:08.935860   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 10:49:08.935931   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 10:49:08.961307   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 10:49:08.961369   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 10:49:08.985321   24174 provision.go:87] duration metric: took 251.497465ms to configureAuth
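A quick way to double-check the server certificate that copyRemoteCerts just placed at /etc/docker/server.pem on the new machine is a generic openssl inspection (not a command from this run; the path and SAN list come from the steps above):

	openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'

The SANs should match the list used when the cert was generated: 127.0.0.1, 192.168.39.127, ha-461283-m03, localhost, minikube.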
	I0722 10:49:08.985347   24174 buildroot.go:189] setting minikube options for container-runtime
	I0722 10:49:08.985549   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:49:08.985628   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:08.988095   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.988340   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:08.988364   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:08.988597   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:08.988779   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.988937   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:08.989073   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:08.989195   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:49:08.989341   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0722 10:49:08.989360   24174 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 10:49:09.280714   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 10:49:09.280741   24174 main.go:141] libmachine: Checking connection to Docker...
	I0722 10:49:09.280750   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetURL
	I0722 10:49:09.281926   24174 main.go:141] libmachine: (ha-461283-m03) DBG | Using libvirt version 6000000
	I0722 10:49:09.284425   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.284839   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:09.284889   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.285040   24174 main.go:141] libmachine: Docker is up and running!
	I0722 10:49:09.285052   24174 main.go:141] libmachine: Reticulating splines...
	I0722 10:49:09.285058   24174 client.go:171] duration metric: took 24.610441153s to LocalClient.Create
	I0722 10:49:09.285077   24174 start.go:167] duration metric: took 24.61049373s to libmachine.API.Create "ha-461283"
	I0722 10:49:09.285089   24174 start.go:293] postStartSetup for "ha-461283-m03" (driver="kvm2")
	I0722 10:49:09.285105   24174 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 10:49:09.285124   24174 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:49:09.285358   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 10:49:09.285386   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:09.287781   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.288195   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:09.288223   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.288361   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:09.288539   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:09.288690   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:09.288832   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:49:09.374634   24174 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 10:49:09.378831   24174 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 10:49:09.378853   24174 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 10:49:09.378915   24174 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 10:49:09.378979   24174 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 10:49:09.378987   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /etc/ssl/certs/130982.pem
	I0722 10:49:09.379068   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 10:49:09.389186   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 10:49:09.413193   24174 start.go:296] duration metric: took 128.08844ms for postStartSetup
	I0722 10:49:09.413234   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetConfigRaw
	I0722 10:49:09.413768   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:49:09.416467   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.416824   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:09.416852   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.417089   24174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:49:09.417279   24174 start.go:128] duration metric: took 24.760434681s to createHost
	I0722 10:49:09.417311   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:09.419757   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.420078   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:09.420105   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.420264   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:09.420458   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:09.420609   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:09.420749   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:09.420883   24174 main.go:141] libmachine: Using SSH client type: native
	I0722 10:49:09.421073   24174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0722 10:49:09.421084   24174 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 10:49:09.528822   24174 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645349.505034760
	
	I0722 10:49:09.528841   24174 fix.go:216] guest clock: 1721645349.505034760
	I0722 10:49:09.528848   24174 fix.go:229] Guest: 2024-07-22 10:49:09.50503476 +0000 UTC Remote: 2024-07-22 10:49:09.41729795 +0000 UTC m=+151.263842966 (delta=87.73681ms)
	I0722 10:49:09.528862   24174 fix.go:200] guest clock delta is within tolerance: 87.73681ms
	I0722 10:49:09.528872   24174 start.go:83] releasing machines lock for "ha-461283-m03", held for 24.872130242s
	I0722 10:49:09.528889   24174 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:49:09.529167   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:49:09.531836   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.532231   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:09.532260   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.534306   24174 out.go:177] * Found network options:
	I0722 10:49:09.535565   24174 out.go:177]   - NO_PROXY=192.168.39.43,192.168.39.207
	W0722 10:49:09.536739   24174 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 10:49:09.536762   24174 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 10:49:09.536783   24174 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:49:09.537363   24174 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:49:09.537535   24174 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:49:09.537627   24174 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 10:49:09.537664   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	W0722 10:49:09.537741   24174 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 10:49:09.537761   24174 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 10:49:09.537821   24174 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 10:49:09.537842   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:49:09.539945   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.540294   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:09.540321   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.540342   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.540454   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:09.540630   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:09.540807   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:09.540813   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:09.540831   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:09.540935   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:49:09.541008   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:49:09.541134   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:49:09.541287   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:49:09.541426   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:49:09.782542   24174 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 10:49:09.789559   24174 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 10:49:09.789624   24174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 10:49:09.805342   24174 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 10:49:09.805366   24174 start.go:495] detecting cgroup driver to use...
	I0722 10:49:09.805431   24174 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 10:49:09.822372   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 10:49:09.835744   24174 docker.go:217] disabling cri-docker service (if available) ...
	I0722 10:49:09.835792   24174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 10:49:09.848940   24174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 10:49:09.862003   24174 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 10:49:09.986348   24174 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 10:49:10.155950   24174 docker.go:233] disabling docker service ...
	I0722 10:49:10.156006   24174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 10:49:10.170158   24174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 10:49:10.182854   24174 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 10:49:10.296909   24174 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 10:49:10.406158   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 10:49:10.420189   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 10:49:10.438116   24174 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 10:49:10.438178   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:49:10.448415   24174 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 10:49:10.448476   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:49:10.458871   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:49:10.469518   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:49:10.479701   24174 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 10:49:10.490060   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:49:10.501689   24174 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:49:10.518496   24174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
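Taken together, the sed commands above amount to a CRI-O drop-in roughly like the following (a reconstruction from the edits shown, section headers omitted; the file being edited is /etc/crio/crio.conf.d/02-crio.conf):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

CRI-O only picks these settings up after the `systemctl restart crio` that follows a few lines below.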
	I0722 10:49:10.530601   24174 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 10:49:10.541551   24174 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 10:49:10.541608   24174 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 10:49:10.556668   24174 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 10:49:10.567356   24174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:49:10.700055   24174 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 10:49:10.843840   24174 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 10:49:10.843920   24174 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 10:49:10.848752   24174 start.go:563] Will wait 60s for crictl version
	I0722 10:49:10.848801   24174 ssh_runner.go:195] Run: which crictl
	I0722 10:49:10.852600   24174 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 10:49:10.892773   24174 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 10:49:10.892864   24174 ssh_runner.go:195] Run: crio --version
	I0722 10:49:10.921241   24174 ssh_runner.go:195] Run: crio --version
	I0722 10:49:10.950455   24174 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 10:49:10.951626   24174 out.go:177]   - env NO_PROXY=192.168.39.43
	I0722 10:49:10.952757   24174 out.go:177]   - env NO_PROXY=192.168.39.43,192.168.39.207
	I0722 10:49:10.954000   24174 main.go:141] libmachine: (ha-461283-m03) Calling .GetIP
	I0722 10:49:10.956328   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:10.956698   24174 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:49:10.956722   24174 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:49:10.956922   24174 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 10:49:10.961914   24174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 10:49:10.974396   24174 mustload.go:65] Loading cluster: ha-461283
	I0722 10:49:10.974575   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:49:10.974811   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:49:10.974850   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:49:10.991013   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44153
	I0722 10:49:10.991418   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:49:10.991902   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:49:10.991922   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:49:10.992224   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:49:10.992441   24174 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:49:10.993938   24174 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:49:10.994219   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:49:10.994250   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:49:11.009575   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39447
	I0722 10:49:11.009939   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:49:11.010337   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:49:11.010356   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:49:11.010651   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:49:11.010817   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:49:11.010962   24174 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283 for IP: 192.168.39.127
	I0722 10:49:11.010973   24174 certs.go:194] generating shared ca certs ...
	I0722 10:49:11.010991   24174 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:49:11.011122   24174 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 10:49:11.011167   24174 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 10:49:11.011176   24174 certs.go:256] generating profile certs ...
	I0722 10:49:11.011243   24174 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key
	I0722 10:49:11.011265   24174 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.f56168a6
	I0722 10:49:11.011278   24174 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.f56168a6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.43 192.168.39.207 192.168.39.127 192.168.39.254]
	I0722 10:49:11.449858   24174 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.f56168a6 ...
	I0722 10:49:11.449891   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.f56168a6: {Name:mk1acccb6e32b46331a2aec037f91e925bb70c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:49:11.450071   24174 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.f56168a6 ...
	I0722 10:49:11.450087   24174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.f56168a6: {Name:mkc815b51982cb420308edd988d909dd01ec0f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:49:11.450166   24174 certs.go:381] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.f56168a6 -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt
	I0722 10:49:11.450291   24174 certs.go:385] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.f56168a6 -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key
	I0722 10:49:11.450418   24174 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key
	I0722 10:49:11.450434   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 10:49:11.450447   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 10:49:11.450462   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 10:49:11.450477   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 10:49:11.450492   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 10:49:11.450506   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 10:49:11.450520   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 10:49:11.450534   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 10:49:11.450585   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 10:49:11.450615   24174 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 10:49:11.450625   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 10:49:11.450647   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 10:49:11.450671   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 10:49:11.450695   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 10:49:11.450735   24174 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 10:49:11.450762   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem -> /usr/share/ca-certificates/13098.pem
	I0722 10:49:11.450778   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /usr/share/ca-certificates/130982.pem
	I0722 10:49:11.450792   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:49:11.450824   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:49:11.453996   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:49:11.454437   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:49:11.454465   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:49:11.454585   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:49:11.454768   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:49:11.454935   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:49:11.455098   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:49:11.528707   24174 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0722 10:49:11.534017   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0722 10:49:11.544709   24174 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0722 10:49:11.548890   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0722 10:49:11.559654   24174 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0722 10:49:11.563732   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0722 10:49:11.574079   24174 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0722 10:49:11.578279   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0722 10:49:11.590284   24174 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0722 10:49:11.594962   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0722 10:49:11.606237   24174 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0722 10:49:11.610641   24174 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0722 10:49:11.624774   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 10:49:11.652394   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 10:49:11.678403   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 10:49:11.703983   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 10:49:11.729402   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0722 10:49:11.752843   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 10:49:11.776177   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 10:49:11.799762   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 10:49:11.823974   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 10:49:11.849282   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 10:49:11.871220   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 10:49:11.893411   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0722 10:49:11.911137   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0722 10:49:11.928736   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0722 10:49:11.945859   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0722 10:49:11.962202   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0722 10:49:11.978598   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0722 10:49:11.995906   24174 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0722 10:49:12.012711   24174 ssh_runner.go:195] Run: openssl version
	I0722 10:49:12.018670   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 10:49:12.028738   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 10:49:12.032952   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 10:49:12.032997   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 10:49:12.038567   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 10:49:12.049963   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 10:49:12.061165   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 10:49:12.065930   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 10:49:12.065971   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 10:49:12.072079   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 10:49:12.082486   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 10:49:12.092554   24174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:49:12.096892   24174 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:49:12.096935   24174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:49:12.102366   24174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 10:49:12.112504   24174 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 10:49:12.116725   24174 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 10:49:12.116776   24174 kubeadm.go:934] updating node {m03 192.168.39.127 8443 v1.30.3 crio true true} ...
	I0722 10:49:12.116845   24174 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-461283-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 10:49:12.116868   24174 kube-vip.go:115] generating kube-vip config ...
	I0722 10:49:12.116896   24174 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 10:49:12.132845   24174 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 10:49:12.132911   24174 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
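This manifest is later copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step further down), so kubelet runs kube-vip as a static pod that advertises the control-plane VIP 192.168.39.254 on port 8443. A minimal check on the node, assuming SSH access to it:

	ls -l /etc/kubernetes/manifests/kube-vip.yaml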
	I0722 10:49:12.132962   24174 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 10:49:12.142555   24174 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0722 10:49:12.142595   24174 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0722 10:49:12.152419   24174 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0722 10:49:12.152444   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 10:49:12.152451   24174 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0722 10:49:12.152475   24174 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0722 10:49:12.152491   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:49:12.152496   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 10:49:12.152512   24174 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 10:49:12.152558   24174 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 10:49:12.158250   24174 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0722 10:49:12.158277   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0722 10:49:12.194641   24174 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0722 10:49:12.194664   24174 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 10:49:12.194682   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0722 10:49:12.194763   24174 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 10:49:12.238431   24174 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0722 10:49:12.238469   24174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0722 10:49:13.052480   24174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0722 10:49:13.061695   24174 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0722 10:49:13.078693   24174 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 10:49:13.095911   24174 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0722 10:49:13.114238   24174 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0722 10:49:13.118705   24174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 10:49:13.131082   24174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:49:13.268944   24174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 10:49:13.285635   24174 host.go:66] Checking if "ha-461283" exists ...
	I0722 10:49:13.285981   24174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:49:13.286030   24174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:49:13.302166   24174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
	I0722 10:49:13.302525   24174 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:49:13.302951   24174 main.go:141] libmachine: Using API Version  1
	I0722 10:49:13.302971   24174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:49:13.303328   24174 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:49:13.303498   24174 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:49:13.303641   24174 start.go:317] joinCluster: &{Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:49:13.303797   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0722 10:49:13.303817   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:49:13.306668   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:49:13.307257   24174 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:49:13.307279   24174 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:49:13.307436   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:49:13.307577   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:49:13.307744   24174 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:49:13.307913   24174 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:49:13.460830   24174 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:49:13.460879   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v2m5lg.582egtnlncp86dov --discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-461283-m03 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443"
	I0722 10:49:37.780469   24174 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v2m5lg.582egtnlncp86dov --discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-461283-m03 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443": (24.319566133s)
	I0722 10:49:37.780510   24174 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0722 10:49:38.407486   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-461283-m03 minikube.k8s.io/updated_at=2024_07_22T10_49_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=ha-461283 minikube.k8s.io/primary=false
	I0722 10:49:38.528981   24174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-461283-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0722 10:49:38.644971   24174 start.go:319] duration metric: took 25.341327641s to joinCluster
	I0722 10:49:38.645043   24174 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 10:49:38.645355   24174 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:49:38.646239   24174 out.go:177] * Verifying Kubernetes components...
	I0722 10:49:38.647507   24174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:49:38.912498   24174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 10:49:38.974546   24174 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:49:38.974768   24174 kapi.go:59] client config for ha-461283: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.crt", KeyFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key", CAFile:"/home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0722 10:49:38.974823   24174 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.43:8443
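The warning above swaps the stale load-balancer endpoint for a direct control-plane address before polling begins. In client-go terms this amounts to overriding rest.Config.Host before building the clientset; a minimal sketch follows (the kubeconfig path is a placeholder, the addresses are the ones from the log, and the rest is assumed rather than taken from minikube's source):

// Minimal sketch, assuming k8s.io/client-go: point an existing client config
// at one control-plane endpoint instead of the HA virtual IP.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cfg.Host = "https://192.168.39.43:8443" // talk to one control plane directly
	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = cs // use the clientset for the readiness polls that follow
}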
	I0722 10:49:38.975036   24174 node_ready.go:35] waiting up to 6m0s for node "ha-461283-m03" to be "Ready" ...
	I0722 10:49:38.975119   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:38.975128   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:38.975135   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:38.975138   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:38.978489   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:39.475235   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:39.475259   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:39.475272   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:39.475278   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:39.479578   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:49:39.976259   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:39.976282   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:39.976294   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:39.976302   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:39.979733   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:40.475184   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:40.475203   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:40.475211   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:40.475216   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:40.479258   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:49:40.975741   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:40.975763   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:40.975773   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:40.975779   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:40.979651   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:40.980436   24174 node_ready.go:53] node "ha-461283-m03" has status "Ready":"False"
	I0722 10:49:41.475913   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:41.475937   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:41.475947   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:41.475954   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:41.480780   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:49:41.976164   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:41.976188   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:41.976198   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:41.976203   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:41.979341   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:42.475264   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:42.475300   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:42.475309   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:42.475312   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:42.478872   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:42.975873   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:42.975896   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:42.975904   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:42.975907   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:42.979944   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:49:43.475598   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:43.475621   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:43.475627   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:43.475632   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:43.479075   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:43.479748   24174 node_ready.go:53] node "ha-461283-m03" has status "Ready":"False"
	I0722 10:49:43.975810   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:43.975831   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:43.975842   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:43.975850   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:43.979384   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:44.476088   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:44.476112   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:44.476123   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:44.476129   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:44.480188   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:49:44.975913   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:44.975933   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:44.975941   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:44.975945   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:44.979258   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:45.476118   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:45.476146   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:45.476155   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:45.476168   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:45.480099   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:45.480773   24174 node_ready.go:53] node "ha-461283-m03" has status "Ready":"False"
	I0722 10:49:45.975573   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:45.975594   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:45.975603   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:45.975607   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:45.979283   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:46.475626   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:46.475657   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:46.475669   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:46.475673   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:46.480160   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:49:46.975996   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:46.976018   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:46.976026   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:46.976031   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:46.981084   24174 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 10:49:47.475268   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:47.475294   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:47.475306   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:47.475311   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:47.478707   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:47.975836   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:47.975856   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:47.975866   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:47.975871   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:47.979275   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:47.980112   24174 node_ready.go:53] node "ha-461283-m03" has status "Ready":"False"
	I0722 10:49:48.475457   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:48.475477   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:48.475485   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:48.475493   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:48.479131   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:48.976305   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:48.976327   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:48.976337   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:48.976343   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:48.980020   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:49.475301   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:49.475325   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:49.475336   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:49.475343   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:49.479220   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:49.975275   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:49.975296   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:49.975304   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:49.975308   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:49.978767   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:50.475603   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:50.475628   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:50.475638   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:50.475642   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:50.478903   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:50.479595   24174 node_ready.go:53] node "ha-461283-m03" has status "Ready":"False"
	I0722 10:49:50.976185   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:50.976208   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:50.976218   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:50.976225   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:50.979573   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:51.475973   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:51.476000   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:51.476007   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:51.476013   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:51.479697   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:51.975307   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:51.975328   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:51.975336   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:51.975341   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:51.978674   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:51.979567   24174 node_ready.go:49] node "ha-461283-m03" has status "Ready":"True"
	I0722 10:49:51.979606   24174 node_ready.go:38] duration metric: took 13.004548385s for node "ha-461283-m03" to be "Ready" ...
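The repeated GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03 requests above are a roughly 500ms polling loop that keeps reading the node until its Ready condition turns True. A minimal sketch of that kind of loop, assuming k8s.io/client-go (kubeconfig path is a placeholder; the node name and timeout are taken from the log):

// Minimal sketch, assuming client-go: poll a node until its Ready condition is True,
// approximating the node_ready.go wait recorded above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence in the log
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node reported Ready:"True"
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // the log waits up to 6m0s
	defer cancel()
	if err := waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "ha-461283-m03"); err != nil {
		panic(err)
	}
	fmt.Println("node ha-461283-m03 is Ready")
}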
	I0722 10:49:51.979617   24174 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 10:49:51.979693   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:49:51.979704   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:51.979714   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:51.979719   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:51.988241   24174 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 10:49:51.995547   24174 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qrfdd" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:51.995631   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-qrfdd
	I0722 10:49:51.995639   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:51.995647   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:51.995653   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:51.998724   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:51.999389   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:51.999405   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:51.999412   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:51.999417   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.001964   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:49:52.002707   24174 pod_ready.go:92] pod "coredns-7db6d8ff4d-qrfdd" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:52.002733   24174 pod_ready.go:81] duration metric: took 7.158178ms for pod "coredns-7db6d8ff4d-qrfdd" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.002745   24174 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zb547" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.002815   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zb547
	I0722 10:49:52.002826   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.002834   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.002851   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.006824   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:52.008042   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:52.008060   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.008070   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.008078   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.011406   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:52.011980   24174 pod_ready.go:92] pod "coredns-7db6d8ff4d-zb547" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:52.011998   24174 pod_ready.go:81] duration metric: took 9.244763ms for pod "coredns-7db6d8ff4d-zb547" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.012009   24174 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.012063   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-ha-461283
	I0722 10:49:52.012072   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.012082   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.012087   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.015146   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:52.015766   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:52.015784   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.015794   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.015801   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.018603   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:49:52.019054   24174 pod_ready.go:92] pod "etcd-ha-461283" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:52.019070   24174 pod_ready.go:81] duration metric: took 7.053565ms for pod "etcd-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.019078   24174 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.019122   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-ha-461283-m02
	I0722 10:49:52.019130   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.019142   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.019146   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.022351   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:52.022888   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:52.022901   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.022908   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.022912   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.025786   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:49:52.026300   24174 pod_ready.go:92] pod "etcd-ha-461283-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:52.026320   24174 pod_ready.go:81] duration metric: took 7.235909ms for pod "etcd-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.026332   24174 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-461283-m03" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.175726   24174 request.go:629] Waited for 149.300225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-ha-461283-m03
	I0722 10:49:52.175783   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-ha-461283-m03
	I0722 10:49:52.175789   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.175796   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.175803   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.179606   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:52.375378   24174 request.go:629] Waited for 195.273197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:52.375445   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:52.375451   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.375458   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.375464   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.378558   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:52.379370   24174 pod_ready.go:92] pod "etcd-ha-461283-m03" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:52.379384   24174 pod_ready.go:81] duration metric: took 353.046152ms for pod "etcd-ha-461283-m03" in "kube-system" namespace to be "Ready" ...
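The "Waited for ... due to client-side throttling" lines come from client-go's local rate limiter, not from server-side API Priority and Fairness: with QPS and Burst left at their zero values in the rest.Config dumped earlier, client-go falls back to its defaults (historically 5 requests/s with a burst of 10) and delays requests on the client. A minimal sketch of where those knobs live, assuming client-go; the values shown are illustrative, not what minikube uses:

// Minimal sketch, assuming client-go: client-side throttling is governed by
// rest.Config.QPS and rest.Config.Burst.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	// Zero values mean client-go applies its defaults and queues requests locally,
	// producing "Waited for ... due to client-side throttling" messages like the ones above.
	cfg.QPS = 50    // example: 50 requests/second steady state
	cfg.Burst = 100 // example: bursts of up to 100 requests
	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = cs // subsequent requests are throttled far less aggressively
}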
	I0722 10:49:52.379400   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.575549   24174 request.go:629] Waited for 196.096059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283
	I0722 10:49:52.575635   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283
	I0722 10:49:52.575650   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.575657   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.575661   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.578951   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:52.776165   24174 request.go:629] Waited for 196.343974ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:52.776257   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:52.776269   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.776280   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.776287   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.779509   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:52.780233   24174 pod_ready.go:92] pod "kube-apiserver-ha-461283" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:52.780254   24174 pod_ready.go:81] duration metric: took 400.846867ms for pod "kube-apiserver-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.780267   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:52.975277   24174 request.go:629] Waited for 194.944118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m02
	I0722 10:49:52.975355   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m02
	I0722 10:49:52.975363   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:52.975371   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:52.975377   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:52.979405   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:49:53.175500   24174 request.go:629] Waited for 195.358341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:53.175581   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:53.175595   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:53.175606   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:53.175613   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:53.179810   24174 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 10:49:53.180530   24174 pod_ready.go:92] pod "kube-apiserver-ha-461283-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:53.180548   24174 pod_ready.go:81] duration metric: took 400.269537ms for pod "kube-apiserver-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:53.180557   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-461283-m03" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:53.376195   24174 request.go:629] Waited for 195.540352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m03
	I0722 10:49:53.376255   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m03
	I0722 10:49:53.376260   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:53.376268   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:53.376274   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:53.379484   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:53.575517   24174 request.go:629] Waited for 195.277322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:53.575578   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:53.575583   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:53.575589   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:53.575594   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:53.579103   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:53.775997   24174 request.go:629] Waited for 95.253357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m03
	I0722 10:49:53.776050   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m03
	I0722 10:49:53.776055   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:53.776063   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:53.776067   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:53.779071   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:49:53.976250   24174 request.go:629] Waited for 196.379747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:53.976315   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:53.976322   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:53.976333   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:53.976341   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:53.979786   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:54.181473   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-461283-m03
	I0722 10:49:54.181497   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:54.181507   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:54.181512   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:54.184611   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:54.375617   24174 request.go:629] Waited for 190.345543ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:54.375704   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:54.375712   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:54.375720   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:54.375724   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:54.379330   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:54.380161   24174 pod_ready.go:92] pod "kube-apiserver-ha-461283-m03" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:54.380180   24174 pod_ready.go:81] duration metric: took 1.199616581s for pod "kube-apiserver-ha-461283-m03" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:54.380191   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:54.575609   24174 request.go:629] Waited for 195.343993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283
	I0722 10:49:54.575679   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283
	I0722 10:49:54.575685   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:54.575692   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:54.575697   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:54.579662   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:54.775880   24174 request.go:629] Waited for 195.319268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:54.775940   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:54.775947   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:54.775958   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:54.775965   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:54.779642   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:54.780628   24174 pod_ready.go:92] pod "kube-controller-manager-ha-461283" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:54.780647   24174 pod_ready.go:81] duration metric: took 400.449567ms for pod "kube-controller-manager-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:54.780656   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:54.975688   24174 request.go:629] Waited for 194.945686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283-m02
	I0722 10:49:54.975738   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283-m02
	I0722 10:49:54.975743   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:54.975749   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:54.975753   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:54.979037   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:55.175286   24174 request.go:629] Waited for 195.301108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:55.175342   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:55.175348   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:55.175356   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:55.175365   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:55.179116   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:55.179656   24174 pod_ready.go:92] pod "kube-controller-manager-ha-461283-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:55.179673   24174 pod_ready.go:81] duration metric: took 399.011357ms for pod "kube-controller-manager-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:55.179687   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-461283-m03" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:55.375695   24174 request.go:629] Waited for 195.933455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283-m03
	I0722 10:49:55.375783   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283-m03
	I0722 10:49:55.375795   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:55.375807   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:55.375816   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:55.379578   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:55.575703   24174 request.go:629] Waited for 195.274723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:55.575758   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:55.575763   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:55.575770   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:55.575775   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:55.579123   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:55.579750   24174 pod_ready.go:92] pod "kube-controller-manager-ha-461283-m03" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:55.579769   24174 pod_ready.go:81] duration metric: took 400.074203ms for pod "kube-controller-manager-ha-461283-m03" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:55.579778   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-28zxf" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:55.775854   24174 request.go:629] Waited for 196.003639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-28zxf
	I0722 10:49:55.775926   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-28zxf
	I0722 10:49:55.775937   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:55.775949   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:55.775961   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:55.779658   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:55.975779   24174 request.go:629] Waited for 195.258311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:55.975842   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:55.975847   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:55.975855   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:55.975861   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:55.979165   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:55.979751   24174 pod_ready.go:92] pod "kube-proxy-28zxf" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:55.979771   24174 pod_ready.go:81] duration metric: took 399.987026ms for pod "kube-proxy-28zxf" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:55.979780   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xkbsx" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:56.175411   24174 request.go:629] Waited for 195.565573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkbsx
	I0722 10:49:56.175491   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkbsx
	I0722 10:49:56.175500   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:56.175507   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:56.175511   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:56.179143   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:56.375754   24174 request.go:629] Waited for 195.399438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:56.375817   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:56.375825   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:56.375835   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:56.375842   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:56.379571   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:56.380445   24174 pod_ready.go:92] pod "kube-proxy-xkbsx" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:56.380466   24174 pod_ready.go:81] duration metric: took 400.679442ms for pod "kube-proxy-xkbsx" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:56.380479   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zdbjw" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:56.575388   24174 request.go:629] Waited for 194.828894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zdbjw
	I0722 10:49:56.575440   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zdbjw
	I0722 10:49:56.575447   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:56.575455   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:56.575462   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:56.579016   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:56.776132   24174 request.go:629] Waited for 196.361583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:56.776214   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:56.776225   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:56.776236   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:56.776244   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:56.779256   24174 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 10:49:56.779921   24174 pod_ready.go:92] pod "kube-proxy-zdbjw" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:56.779941   24174 pod_ready.go:81] duration metric: took 399.455729ms for pod "kube-proxy-zdbjw" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:56.779958   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:56.975993   24174 request.go:629] Waited for 195.977344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283
	I0722 10:49:56.976047   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283
	I0722 10:49:56.976052   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:56.976061   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:56.976069   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:56.979391   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:57.175410   24174 request.go:629] Waited for 195.285956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:57.175470   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283
	I0722 10:49:57.175475   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:57.175483   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:57.175487   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:57.178950   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:57.179729   24174 pod_ready.go:92] pod "kube-scheduler-ha-461283" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:57.179746   24174 pod_ready.go:81] duration metric: took 399.780455ms for pod "kube-scheduler-ha-461283" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:57.179756   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:57.375848   24174 request.go:629] Waited for 196.035002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283-m02
	I0722 10:49:57.375947   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283-m02
	I0722 10:49:57.375965   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:57.375991   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:57.376000   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:57.379397   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:57.575400   24174 request.go:629] Waited for 195.271015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:57.575465   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m02
	I0722 10:49:57.575470   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:57.575477   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:57.575482   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:57.579006   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:57.579926   24174 pod_ready.go:92] pod "kube-scheduler-ha-461283-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:57.579944   24174 pod_ready.go:81] duration metric: took 400.18132ms for pod "kube-scheduler-ha-461283-m02" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:57.579956   24174 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-461283-m03" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:57.776045   24174 request.go:629] Waited for 196.01819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283-m03
	I0722 10:49:57.776114   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-461283-m03
	I0722 10:49:57.776122   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:57.776132   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:57.776141   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:57.779891   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:57.976077   24174 request.go:629] Waited for 195.361716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:57.976142   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/ha-461283-m03
	I0722 10:49:57.976151   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:57.976162   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:57.976172   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:57.979683   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:57.980456   24174 pod_ready.go:92] pod "kube-scheduler-ha-461283-m03" in "kube-system" namespace has status "Ready":"True"
	I0722 10:49:57.980475   24174 pod_ready.go:81] duration metric: took 400.51165ms for pod "kube-scheduler-ha-461283-m03" in "kube-system" namespace to be "Ready" ...
	I0722 10:49:57.980486   24174 pod_ready.go:38] duration metric: took 6.00085144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 10:49:57.980499   24174 api_server.go:52] waiting for apiserver process to appear ...
	I0722 10:49:57.980547   24174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 10:49:57.998327   24174 api_server.go:72] duration metric: took 19.353247057s to wait for apiserver process to appear ...
	I0722 10:49:57.998350   24174 api_server.go:88] waiting for apiserver healthz status ...
	I0722 10:49:57.998367   24174 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0722 10:49:58.005000   24174 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I0722 10:49:58.005073   24174 round_trippers.go:463] GET https://192.168.39.43:8443/version
	I0722 10:49:58.005085   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:58.005094   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:58.005100   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:58.005968   24174 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0722 10:49:58.006029   24174 api_server.go:141] control plane version: v1.30.3
	I0722 10:49:58.006044   24174 api_server.go:131] duration metric: took 7.687976ms to wait for apiserver health ...
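The api_server.go checks above probe /healthz and then read /version to report the control-plane version. A minimal sketch of the same two probes, assuming client-go's discovery client (kubeconfig path is a placeholder):

// Minimal sketch, assuming client-go: probe /healthz and read the server version,
// mirroring the api_server.go checks recorded above.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz through the discovery client's REST interface; expect "ok" as in the log.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.Background()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version, reported as "control plane version" in the log.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}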
	I0722 10:49:58.006053   24174 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 10:49:58.175855   24174 request.go:629] Waited for 169.718373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:49:58.175899   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:49:58.175904   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:58.175916   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:58.175922   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:58.182153   24174 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 10:49:58.191155   24174 system_pods.go:59] 24 kube-system pods found
	I0722 10:49:58.191185   24174 system_pods.go:61] "coredns-7db6d8ff4d-qrfdd" [f1c9698a-e97d-4b8a-ab71-f19003b5dcfd] Running
	I0722 10:49:58.191191   24174 system_pods.go:61] "coredns-7db6d8ff4d-zb547" [54886641-9710-4355-86ff-016ad48b5cd5] Running
	I0722 10:49:58.191197   24174 system_pods.go:61] "etcd-ha-461283" [842e06f5-5c51-4cd9-b6ab-b3a8cbc9e23b] Running
	I0722 10:49:58.191201   24174 system_pods.go:61] "etcd-ha-461283-m02" [832101e1-09b9-4b1c-a39b-77c46725a280] Running
	I0722 10:49:58.191205   24174 system_pods.go:61] "etcd-ha-461283-m03" [4e5fe31e-0b87-4ab1-8344-d6c7f7f4beb8] Running
	I0722 10:49:58.191209   24174 system_pods.go:61] "kindnet-9m2ms" [9b540ee3-5d01-422c-85e7-b5a5b7e2bcba] Running
	I0722 10:49:58.191214   24174 system_pods.go:61] "kindnet-hmrqh" [abe55aff-7926-481f-90cd-3cc209d79f63] Running
	I0722 10:49:58.191218   24174 system_pods.go:61] "kindnet-qsphb" [6b302f3f-51ae-4492-8ac3-470e7739ad08] Running
	I0722 10:49:58.191223   24174 system_pods.go:61] "kube-apiserver-ha-461283" [ca55ae7f-0148-4802-b9cb-424453f13992] Running
	I0722 10:49:58.191228   24174 system_pods.go:61] "kube-apiserver-ha-461283-m02" [d19287ef-f418-4ec5-bb43-e42dd94562ea] Running
	I0722 10:49:58.191236   24174 system_pods.go:61] "kube-apiserver-ha-461283-m03" [e0fd45ad-15f4-486f-a67d-c9e281f5b088] Running
	I0722 10:49:58.191242   24174 system_pods.go:61] "kube-controller-manager-ha-461283" [3adf0e38-7eb7-4945-9059-5371718a8d92] Running
	I0722 10:49:58.191250   24174 system_pods.go:61] "kube-controller-manager-ha-461283-m02" [d1cebc09-9543-4d78-a1b9-785e4c489814] Running
	I0722 10:49:58.191255   24174 system_pods.go:61] "kube-controller-manager-ha-461283-m03" [e5388816-2cb2-42eb-a732-fda7f45f77ea] Running
	I0722 10:49:58.191263   24174 system_pods.go:61] "kube-proxy-28zxf" [5894062f-0d05-45f4-88eb-da134f234e2d] Running
	I0722 10:49:58.191268   24174 system_pods.go:61] "kube-proxy-xkbsx" [9d137555-9952-418f-bbfb-2159a48bbfcc] Running
	I0722 10:49:58.191276   24174 system_pods.go:61] "kube-proxy-zdbjw" [f60a30fe-aa02-4f0c-ab22-c8c26a02d5e3] Running
	I0722 10:49:58.191282   24174 system_pods.go:61] "kube-scheduler-ha-461283" [3c18099b-16d8-4214-92c8-b583323bed9b] Running
	I0722 10:49:58.191289   24174 system_pods.go:61] "kube-scheduler-ha-461283-m02" [bdffe858-ca6b-4f8c-951a-e08115dff406] Running
	I0722 10:49:58.191324   24174 system_pods.go:61] "kube-scheduler-ha-461283-m03" [1ef00867-aff1-4ace-8608-446fe7a89777] Running
	I0722 10:49:58.191336   24174 system_pods.go:61] "kube-vip-ha-461283" [244dde01-94fe-46c1-82f2-92ca2624750e] Running
	I0722 10:49:58.191342   24174 system_pods.go:61] "kube-vip-ha-461283-m02" [a74a9071-1b29-4c1a-abc4-b57a7499e3d8] Running
	I0722 10:49:58.191347   24174 system_pods.go:61] "kube-vip-ha-461283-m03" [1a8e6ea4-4cbb-4adb-bb70-63be44cbd682] Running
	I0722 10:49:58.191354   24174 system_pods.go:61] "storage-provisioner" [a336a57b-330a-4251-8e33-2b277593a565] Running
	I0722 10:49:58.191362   24174 system_pods.go:74] duration metric: took 185.300855ms to wait for pod list to return data ...
	I0722 10:49:58.191374   24174 default_sa.go:34] waiting for default service account to be created ...
	I0722 10:49:58.375870   24174 request.go:629] Waited for 184.421682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/default/serviceaccounts
	I0722 10:49:58.375924   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/default/serviceaccounts
	I0722 10:49:58.375929   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:58.375937   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:58.375942   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:58.379010   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:58.379135   24174 default_sa.go:45] found service account: "default"
	I0722 10:49:58.379150   24174 default_sa.go:55] duration metric: took 187.76681ms for default service account to be created ...
	I0722 10:49:58.379158   24174 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 10:49:58.575488   24174 request.go:629] Waited for 196.270322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:49:58.575554   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I0722 10:49:58.575561   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:58.575571   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:58.575575   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:58.581970   24174 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 10:49:58.588869   24174 system_pods.go:86] 24 kube-system pods found
	I0722 10:49:58.588894   24174 system_pods.go:89] "coredns-7db6d8ff4d-qrfdd" [f1c9698a-e97d-4b8a-ab71-f19003b5dcfd] Running
	I0722 10:49:58.588900   24174 system_pods.go:89] "coredns-7db6d8ff4d-zb547" [54886641-9710-4355-86ff-016ad48b5cd5] Running
	I0722 10:49:58.588904   24174 system_pods.go:89] "etcd-ha-461283" [842e06f5-5c51-4cd9-b6ab-b3a8cbc9e23b] Running
	I0722 10:49:58.588908   24174 system_pods.go:89] "etcd-ha-461283-m02" [832101e1-09b9-4b1c-a39b-77c46725a280] Running
	I0722 10:49:58.588912   24174 system_pods.go:89] "etcd-ha-461283-m03" [4e5fe31e-0b87-4ab1-8344-d6c7f7f4beb8] Running
	I0722 10:49:58.588916   24174 system_pods.go:89] "kindnet-9m2ms" [9b540ee3-5d01-422c-85e7-b5a5b7e2bcba] Running
	I0722 10:49:58.588920   24174 system_pods.go:89] "kindnet-hmrqh" [abe55aff-7926-481f-90cd-3cc209d79f63] Running
	I0722 10:49:58.588925   24174 system_pods.go:89] "kindnet-qsphb" [6b302f3f-51ae-4492-8ac3-470e7739ad08] Running
	I0722 10:49:58.588932   24174 system_pods.go:89] "kube-apiserver-ha-461283" [ca55ae7f-0148-4802-b9cb-424453f13992] Running
	I0722 10:49:58.588938   24174 system_pods.go:89] "kube-apiserver-ha-461283-m02" [d19287ef-f418-4ec5-bb43-e42dd94562ea] Running
	I0722 10:49:58.588945   24174 system_pods.go:89] "kube-apiserver-ha-461283-m03" [e0fd45ad-15f4-486f-a67d-c9e281f5b088] Running
	I0722 10:49:58.588952   24174 system_pods.go:89] "kube-controller-manager-ha-461283" [3adf0e38-7eb7-4945-9059-5371718a8d92] Running
	I0722 10:49:58.588962   24174 system_pods.go:89] "kube-controller-manager-ha-461283-m02" [d1cebc09-9543-4d78-a1b9-785e4c489814] Running
	I0722 10:49:58.588967   24174 system_pods.go:89] "kube-controller-manager-ha-461283-m03" [e5388816-2cb2-42eb-a732-fda7f45f77ea] Running
	I0722 10:49:58.588971   24174 system_pods.go:89] "kube-proxy-28zxf" [5894062f-0d05-45f4-88eb-da134f234e2d] Running
	I0722 10:49:58.588975   24174 system_pods.go:89] "kube-proxy-xkbsx" [9d137555-9952-418f-bbfb-2159a48bbfcc] Running
	I0722 10:49:58.588980   24174 system_pods.go:89] "kube-proxy-zdbjw" [f60a30fe-aa02-4f0c-ab22-c8c26a02d5e3] Running
	I0722 10:49:58.588984   24174 system_pods.go:89] "kube-scheduler-ha-461283" [3c18099b-16d8-4214-92c8-b583323bed9b] Running
	I0722 10:49:58.588988   24174 system_pods.go:89] "kube-scheduler-ha-461283-m02" [bdffe858-ca6b-4f8c-951a-e08115dff406] Running
	I0722 10:49:58.588993   24174 system_pods.go:89] "kube-scheduler-ha-461283-m03" [1ef00867-aff1-4ace-8608-446fe7a89777] Running
	I0722 10:49:58.588997   24174 system_pods.go:89] "kube-vip-ha-461283" [244dde01-94fe-46c1-82f2-92ca2624750e] Running
	I0722 10:49:58.589002   24174 system_pods.go:89] "kube-vip-ha-461283-m02" [a74a9071-1b29-4c1a-abc4-b57a7499e3d8] Running
	I0722 10:49:58.589005   24174 system_pods.go:89] "kube-vip-ha-461283-m03" [1a8e6ea4-4cbb-4adb-bb70-63be44cbd682] Running
	I0722 10:49:58.589008   24174 system_pods.go:89] "storage-provisioner" [a336a57b-330a-4251-8e33-2b277593a565] Running
	I0722 10:49:58.589015   24174 system_pods.go:126] duration metric: took 209.849845ms to wait for k8s-apps to be running ...
	I0722 10:49:58.589021   24174 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 10:49:58.589071   24174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 10:49:58.605159   24174 system_svc.go:56] duration metric: took 16.128323ms WaitForService to wait for kubelet
	I0722 10:49:58.605185   24174 kubeadm.go:582] duration metric: took 19.960108237s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 10:49:58.605208   24174 node_conditions.go:102] verifying NodePressure condition ...
	I0722 10:49:58.775691   24174 request.go:629] Waited for 170.39407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes
	I0722 10:49:58.775750   24174 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes
	I0722 10:49:58.775758   24174 round_trippers.go:469] Request Headers:
	I0722 10:49:58.775768   24174 round_trippers.go:473]     Accept: application/json, */*
	I0722 10:49:58.775777   24174 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0722 10:49:58.779067   24174 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 10:49:58.780404   24174 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 10:49:58.780428   24174 node_conditions.go:123] node cpu capacity is 2
	I0722 10:49:58.780443   24174 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 10:49:58.780448   24174 node_conditions.go:123] node cpu capacity is 2
	I0722 10:49:58.780454   24174 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 10:49:58.780458   24174 node_conditions.go:123] node cpu capacity is 2
	I0722 10:49:58.780464   24174 node_conditions.go:105] duration metric: took 175.248519ms to run NodePressure ...
	I0722 10:49:58.780480   24174 start.go:241] waiting for startup goroutines ...
	I0722 10:49:58.780508   24174 start.go:255] writing updated cluster config ...
	I0722 10:49:58.780987   24174 ssh_runner.go:195] Run: rm -f paused
	I0722 10:49:58.833901   24174 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 10:49:58.835660   24174 out.go:177] * Done! kubectl is now configured to use "ha-461283" cluster and "default" namespace by default
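The tail of the start log above is the readiness gate: before printing "Done!", minikube polls the apiserver's /healthz and /version endpoints (both return 200 here, control plane v1.30.3) and only then writes the updated cluster config. As a rough, standalone sketch of what those two probes amount to, the Go snippet below polls the same endpoints; the hard-coded endpoint, the skipped TLS verification, and the timeout are illustrative assumptions for this sketch, not minikube's actual client configuration (minikube authenticates with the cluster's generated client certificates).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Probe the apiserver the way the log above does: first /healthz, then /version.
// The host:port is taken from the log; the TLS settings are a sketch-only
// shortcut because this snippet has no access to the cluster CA.
func main() {
	base := "https://192.168.39.43:8443"
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: skip server verification instead of
			// loading the cluster's CA bundle and client certificates.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get(base + path)
		if err != nil {
			fmt.Printf("GET %s failed: %v\n", path, err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("GET %s -> %s: %s\n", path, resp.Status, body)
	}
}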
	
	
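The "==> CRI-O <==" section that follows is the container runtime's debug log: roughly every 40 ms a CRI client (the kubelet, in a running cluster) issues Version, ImageFsInfo and ListContainers requests over the CRI gRPC API, and CRI-O echoes each request and its full response, which is why the same container list is dumped repeatedly. A minimal sketch of those three calls, assuming a CRI-O socket at /var/run/crio/crio.sock and the k8s.io/cri-api v1 client (neither of which is taken from this report), could look like the snippet below; in practice the crictl command-line tool exposes the same endpoints (e.g. crictl ps -a for the container list).

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// Sketch of the CRI calls visible in the CRI-O debug log: Version,
// ImageFsInfo and ListContainers over the runtime's gRPC socket.
// The socket path is an assumption for a CRI-O host; adjust as needed.
func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	if v, err := rt.Version(ctx, &runtimeapi.VersionRequest{}); err == nil {
		fmt.Printf("runtime: %s %s\n", v.RuntimeName, v.RuntimeVersion)
	}
	if fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{}); err == nil {
		for _, u := range fs.ImageFilesystems {
			fmt.Printf("image fs %s: %d bytes used\n", u.FsId.Mountpoint, u.UsedBytes.Value)
		}
	}
	if resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{}); err == nil {
		for _, c := range resp.Containers {
			fmt.Printf("%s %s %s\n", c.Id[:12], c.Metadata.Name, c.State)
		}
	}
}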
	==> CRI-O <==
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.567864252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721645680567841246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f3c6107-1280-49de-a04f-737eb5d05996 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.568485991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3526ebb-fb16-450a-838e-36f6856dcd53 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.568558859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3526ebb-fb16-450a-838e-36f6856dcd53 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.568885651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e0d7d39c32b26c8cf125a279920668ca2e951905242a4868edd6a21fca1416c,PodSandboxId:816fd2e7cd706d322555f2f36fbd206ae986d8ed89be70bd1de6c5b649078cfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721645402570912123,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f9af1e9784e28d6f1a3d8907ed95d52086de262ed11e8309757b8a7f3db29b,PodSandboxId:df4c3d24ea139dbcc5ab94af0cf2be59201940f504340e6dc500c086e01fbfad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721645264413847237,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719,PodSandboxId:4723f41d773ba6946b0397961e08b81adf4a47a279dd5f445f56c2a16eef40bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645264374064373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a,PodSandboxId:0c2ec5e338fb367db8b1a2ad528c7af8ed202e4bbaeab5159468a25321378cae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645264350379429,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e9
7d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb,PodSandboxId:e171bdcb5b84c953c7434405b97b99a193f09e165b7345f7c8825f108427dce6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721645252505350541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44,PodSandboxId:ffbce6c0af4bc0412cbe807c70c74257bbd3514d683fc4a1672124862f6298c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172164525
0607547457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6533d7c334e7fda51727c6185c7fa171d3b1c652ce4d368c5d71df0f7feef49,PodSandboxId:97b8ec6ae1c31219c39f0e98c49a73f9bb5ffd0968b64a7215c5c3efc5ef5588,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17216452323
82303288,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ded6380659b2f4b7af2dd651372121bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240,PodSandboxId:54a1041d8e184916319d562d688313e1c1dd4452462b22bf88d27c23483b8d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721645230463101574,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d20cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c8bf4f5df71e6a77d448b9212062f96e90349adbbad8f2e329463bd3e1884d,PodSandboxId:ca3273ac397ead0e26c8356d955855dfe5575fc6c9a09e985060b34c33557ff5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721645230355376992,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08,PodSandboxId:e5abe1a4431950fd7a30c4d790682bbf3541b7faca55396453e5041f929979a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721645230331559113,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce5e449cc185968f3ceb60a6b397366e0ce5da8bed2aaf99f71b156613df39e,PodSandboxId:5d28c62eff243ce10766503792f28f0bd03da2ca60c8245c2143c481a83362f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721645230334934890,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Annotations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3526ebb-fb16-450a-838e-36f6856dcd53 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.609248703Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00f6ddb8-ef3b-43c2-920d-756e03406fdd name=/runtime.v1.RuntimeService/Version
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.609317345Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00f6ddb8-ef3b-43c2-920d-756e03406fdd name=/runtime.v1.RuntimeService/Version
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.610571937Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3943ea8c-a361-4321-8b92-37c5531e3d84 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.611185629Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721645680611162118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3943ea8c-a361-4321-8b92-37c5531e3d84 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.611573315Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7070c3e0-3682-4f30-863f-5c7dd2aebdf6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.611640075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7070c3e0-3682-4f30-863f-5c7dd2aebdf6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.611925003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e0d7d39c32b26c8cf125a279920668ca2e951905242a4868edd6a21fca1416c,PodSandboxId:816fd2e7cd706d322555f2f36fbd206ae986d8ed89be70bd1de6c5b649078cfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721645402570912123,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f9af1e9784e28d6f1a3d8907ed95d52086de262ed11e8309757b8a7f3db29b,PodSandboxId:df4c3d24ea139dbcc5ab94af0cf2be59201940f504340e6dc500c086e01fbfad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721645264413847237,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719,PodSandboxId:4723f41d773ba6946b0397961e08b81adf4a47a279dd5f445f56c2a16eef40bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645264374064373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a,PodSandboxId:0c2ec5e338fb367db8b1a2ad528c7af8ed202e4bbaeab5159468a25321378cae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645264350379429,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e9
7d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb,PodSandboxId:e171bdcb5b84c953c7434405b97b99a193f09e165b7345f7c8825f108427dce6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721645252505350541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44,PodSandboxId:ffbce6c0af4bc0412cbe807c70c74257bbd3514d683fc4a1672124862f6298c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172164525
0607547457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6533d7c334e7fda51727c6185c7fa171d3b1c652ce4d368c5d71df0f7feef49,PodSandboxId:97b8ec6ae1c31219c39f0e98c49a73f9bb5ffd0968b64a7215c5c3efc5ef5588,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17216452323
82303288,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ded6380659b2f4b7af2dd651372121bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240,PodSandboxId:54a1041d8e184916319d562d688313e1c1dd4452462b22bf88d27c23483b8d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721645230463101574,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d20cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c8bf4f5df71e6a77d448b9212062f96e90349adbbad8f2e329463bd3e1884d,PodSandboxId:ca3273ac397ead0e26c8356d955855dfe5575fc6c9a09e985060b34c33557ff5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721645230355376992,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08,PodSandboxId:e5abe1a4431950fd7a30c4d790682bbf3541b7faca55396453e5041f929979a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721645230331559113,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce5e449cc185968f3ceb60a6b397366e0ce5da8bed2aaf99f71b156613df39e,PodSandboxId:5d28c62eff243ce10766503792f28f0bd03da2ca60c8245c2143c481a83362f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721645230334934890,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Annotations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7070c3e0-3682-4f30-863f-5c7dd2aebdf6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.648313785Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=55f99815-dce2-490b-a6c6-ebf110786817 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.648397133Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=55f99815-dce2-490b-a6c6-ebf110786817 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.649495803Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c97fa6f-657c-4bc1-9daf-3efd6fdb1052 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.649989089Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721645680649967992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c97fa6f-657c-4bc1-9daf-3efd6fdb1052 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.650460960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=040cd7e2-4a93-48c3-b04c-34fe4bff4689 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.650521624Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=040cd7e2-4a93-48c3-b04c-34fe4bff4689 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.650740969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e0d7d39c32b26c8cf125a279920668ca2e951905242a4868edd6a21fca1416c,PodSandboxId:816fd2e7cd706d322555f2f36fbd206ae986d8ed89be70bd1de6c5b649078cfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721645402570912123,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f9af1e9784e28d6f1a3d8907ed95d52086de262ed11e8309757b8a7f3db29b,PodSandboxId:df4c3d24ea139dbcc5ab94af0cf2be59201940f504340e6dc500c086e01fbfad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721645264413847237,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719,PodSandboxId:4723f41d773ba6946b0397961e08b81adf4a47a279dd5f445f56c2a16eef40bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645264374064373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a,PodSandboxId:0c2ec5e338fb367db8b1a2ad528c7af8ed202e4bbaeab5159468a25321378cae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645264350379429,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e9
7d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb,PodSandboxId:e171bdcb5b84c953c7434405b97b99a193f09e165b7345f7c8825f108427dce6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721645252505350541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44,PodSandboxId:ffbce6c0af4bc0412cbe807c70c74257bbd3514d683fc4a1672124862f6298c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172164525
0607547457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6533d7c334e7fda51727c6185c7fa171d3b1c652ce4d368c5d71df0f7feef49,PodSandboxId:97b8ec6ae1c31219c39f0e98c49a73f9bb5ffd0968b64a7215c5c3efc5ef5588,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17216452323
82303288,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ded6380659b2f4b7af2dd651372121bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240,PodSandboxId:54a1041d8e184916319d562d688313e1c1dd4452462b22bf88d27c23483b8d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721645230463101574,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d20cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c8bf4f5df71e6a77d448b9212062f96e90349adbbad8f2e329463bd3e1884d,PodSandboxId:ca3273ac397ead0e26c8356d955855dfe5575fc6c9a09e985060b34c33557ff5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721645230355376992,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08,PodSandboxId:e5abe1a4431950fd7a30c4d790682bbf3541b7faca55396453e5041f929979a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721645230331559113,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce5e449cc185968f3ceb60a6b397366e0ce5da8bed2aaf99f71b156613df39e,PodSandboxId:5d28c62eff243ce10766503792f28f0bd03da2ca60c8245c2143c481a83362f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721645230334934890,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Annotations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=040cd7e2-4a93-48c3-b04c-34fe4bff4689 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.692025182Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ff2c62e-ac5e-429f-81d3-5c79de0a89e5 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.692096769Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ff2c62e-ac5e-429f-81d3-5c79de0a89e5 name=/runtime.v1.RuntimeService/Version
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.692962205Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=acd5a6db-a1fa-4486-85b2-37f43506624b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.693369982Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721645680693350370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=acd5a6db-a1fa-4486-85b2-37f43506624b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.693841662Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=219d61e3-412b-4378-906d-ad235685cee4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.693891739Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=219d61e3-412b-4378-906d-ad235685cee4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 10:54:40 ha-461283 crio[683]: time="2024-07-22 10:54:40.694129258Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e0d7d39c32b26c8cf125a279920668ca2e951905242a4868edd6a21fca1416c,PodSandboxId:816fd2e7cd706d322555f2f36fbd206ae986d8ed89be70bd1de6c5b649078cfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721645402570912123,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f9af1e9784e28d6f1a3d8907ed95d52086de262ed11e8309757b8a7f3db29b,PodSandboxId:df4c3d24ea139dbcc5ab94af0cf2be59201940f504340e6dc500c086e01fbfad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721645264413847237,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719,PodSandboxId:4723f41d773ba6946b0397961e08b81adf4a47a279dd5f445f56c2a16eef40bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645264374064373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a,PodSandboxId:0c2ec5e338fb367db8b1a2ad528c7af8ed202e4bbaeab5159468a25321378cae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645264350379429,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e9
7d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb,PodSandboxId:e171bdcb5b84c953c7434405b97b99a193f09e165b7345f7c8825f108427dce6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721645252505350541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44,PodSandboxId:ffbce6c0af4bc0412cbe807c70c74257bbd3514d683fc4a1672124862f6298c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172164525
0607547457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6533d7c334e7fda51727c6185c7fa171d3b1c652ce4d368c5d71df0f7feef49,PodSandboxId:97b8ec6ae1c31219c39f0e98c49a73f9bb5ffd0968b64a7215c5c3efc5ef5588,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17216452323
82303288,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ded6380659b2f4b7af2dd651372121bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240,PodSandboxId:54a1041d8e184916319d562d688313e1c1dd4452462b22bf88d27c23483b8d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721645230463101574,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d20cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c8bf4f5df71e6a77d448b9212062f96e90349adbbad8f2e329463bd3e1884d,PodSandboxId:ca3273ac397ead0e26c8356d955855dfe5575fc6c9a09e985060b34c33557ff5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721645230355376992,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08,PodSandboxId:e5abe1a4431950fd7a30c4d790682bbf3541b7faca55396453e5041f929979a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721645230331559113,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce5e449cc185968f3ceb60a6b397366e0ce5da8bed2aaf99f71b156613df39e,PodSandboxId:5d28c62eff243ce10766503792f28f0bd03da2ca60c8245c2143c481a83362f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721645230334934890,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Annotations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=219d61e3-412b-4378-906d-ad235685cee4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4e0d7d39c32b2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   816fd2e7cd706       busybox-fc5497c4f-hkw9v
	19f9af1e9784e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   df4c3d24ea139       storage-provisioner
	5920882be1f91       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   4723f41d773ba       coredns-7db6d8ff4d-zb547
	797ae9e61fe18       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   0c2ec5e338fb3       coredns-7db6d8ff4d-qrfdd
	165b67d20aa98       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   e171bdcb5b84c       kindnet-hmrqh
	8ad5ed56ce259       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   ffbce6c0af4bc       kube-proxy-28zxf
	b6533d7c334e7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   97b8ec6ae1c31       kube-vip-ha-461283
	70a36c3082983       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   54a1041d8e184       kube-scheduler-ha-461283
	08c8bf4f5df71       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   ca3273ac397ea       kube-controller-manager-ha-461283
	9ce5e449cc185       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   5d28c62eff243       kube-apiserver-ha-461283
	dc7da6bdaabcb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   e5abe1a443195       etcd-ha-461283
	
	
	==> coredns [5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43426 - 1850 "HINFO IN 2832132329847409715.878106688873651055. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010179034s
	[INFO] 10.244.2.2:34562 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.01861668s
	[INFO] 10.244.1.2:53270 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.028944038s
	[INFO] 10.244.1.2:49060 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000094138s
	[INFO] 10.244.0.4:58821 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000212894s
	[INFO] 10.244.0.4:36629 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118072s
	[INFO] 10.244.0.4:39713 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00173787s
	[INFO] 10.244.2.2:34877 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249226s
	[INFO] 10.244.2.2:47321 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000169139s
	[INFO] 10.244.2.2:37812 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009086884s
	[INFO] 10.244.2.2:48940 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000477846s
	[INFO] 10.244.0.4:59919 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000067175s
	[INFO] 10.244.2.2:42645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116023s
	[INFO] 10.244.2.2:46340 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079971s
	[INFO] 10.244.1.2:40840 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133586s
	[INFO] 10.244.1.2:47315 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158975s
	[INFO] 10.244.1.2:41268 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093188s
	[INFO] 10.244.2.2:49311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014354s
	[INFO] 10.244.2.2:35152 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000214208s
	[INFO] 10.244.1.2:60324 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129417s
	[INFO] 10.244.1.2:58260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000228807s
	[INFO] 10.244.1.2:39894 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113717s
	[INFO] 10.244.0.4:56883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152128s
	[INFO] 10.244.0.4:39699 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074743s
	
	
	==> coredns [797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a] <==
	[INFO] 10.244.1.2:54694 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150701s
	[INFO] 10.244.1.2:34456 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147767s
	[INFO] 10.244.1.2:44962 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001367912s
	[INFO] 10.244.1.2:54147 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063996s
	[INFO] 10.244.1.2:60170 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000998s
	[INFO] 10.244.1.2:50008 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060128s
	[INFO] 10.244.0.4:57021 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001828391s
	[INFO] 10.244.0.4:43357 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000054533s
	[INFO] 10.244.0.4:60216 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000029938s
	[INFO] 10.244.0.4:48124 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001149366s
	[INFO] 10.244.0.4:34363 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000035155s
	[INFO] 10.244.0.4:44217 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049654s
	[INFO] 10.244.0.4:35448 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000035288s
	[INFO] 10.244.2.2:42369 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105863s
	[INFO] 10.244.2.2:51781 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069936s
	[INFO] 10.244.1.2:47904 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103521s
	[INFO] 10.244.0.4:49081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120239s
	[INFO] 10.244.0.4:40762 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121632s
	[INFO] 10.244.0.4:59110 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066206s
	[INFO] 10.244.0.4:39650 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092772s
	[INFO] 10.244.2.2:51074 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000265828s
	[INFO] 10.244.2.2:58192 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000130056s
	[INFO] 10.244.1.2:54053 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000255068s
	[INFO] 10.244.0.4:50225 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000074972s
	[INFO] 10.244.0.4:44950 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000080101s
	
	
	==> describe nodes <==
	Name:               ha-461283
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-461283
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-461283
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T10_47_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:47:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-461283
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 10:54:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:50:20 +0000   Mon, 22 Jul 2024 10:47:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:50:20 +0000   Mon, 22 Jul 2024 10:47:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:50:20 +0000   Mon, 22 Jul 2024 10:47:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:50:20 +0000   Mon, 22 Jul 2024 10:47:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-461283
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7adceecddbb41f7a81e4df2b7433c7b
	  System UUID:                f7adceec-ddbb-41f7-a81e-4df2b7433c7b
	  Boot ID:                    16bdd5e7-d27f-4ce8-a232-7bbe4c4337c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hkw9v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 coredns-7db6d8ff4d-qrfdd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m11s
	  kube-system                 coredns-7db6d8ff4d-zb547             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m11s
	  kube-system                 etcd-ha-461283                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m24s
	  kube-system                 kindnet-hmrqh                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m11s
	  kube-system                 kube-apiserver-ha-461283             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-controller-manager-ha-461283    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m25s
	  kube-system                 kube-proxy-28zxf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-scheduler-ha-461283             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-vip-ha-461283                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)    100m (5%)
	  memory             290Mi (13%)   390Mi (18%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m9s   kube-proxy       
	  Normal  Starting                 7m24s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m24s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m24s  kubelet          Node ha-461283 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m24s  kubelet          Node ha-461283 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m24s  kubelet          Node ha-461283 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m11s  node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	  Normal  NodeReady                6m57s  kubelet          Node ha-461283 status is now: NodeReady
	  Normal  RegisteredNode           6m     node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	  Normal  RegisteredNode           4m48s  node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	
	
	Name:               ha-461283-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-461283-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-461283
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T10_48_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:48:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-461283-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 10:51:05 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 22 Jul 2024 10:50:24 +0000   Mon, 22 Jul 2024 10:51:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 22 Jul 2024 10:50:24 +0000   Mon, 22 Jul 2024 10:51:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 22 Jul 2024 10:50:24 +0000   Mon, 22 Jul 2024 10:51:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 22 Jul 2024 10:50:24 +0000   Mon, 22 Jul 2024 10:51:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.207
	  Hostname:    ha-461283-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 164987e6e4bd4513b51bbf58f6e5b85b
	  System UUID:                164987e6-e4bd-4513-b51b-bf58f6e5b85b
	  Boot ID:                    e26a498d-a0e2-4cf4-8724-f393c49d215f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cgtcl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 etcd-ha-461283-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m18s
	  kube-system                 kindnet-qsphb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m20s
	  kube-system                 kube-apiserver-ha-461283-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-461283-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-proxy-xkbsx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-scheduler-ha-461283-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-vip-ha-461283-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   100m (5%)
	  memory             150Mi (7%)   50Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m20s (x8 over 6m20s)  kubelet          Node ha-461283-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s (x8 over 6m20s)  kubelet          Node ha-461283-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s (x7 over 6m20s)  kubelet          Node ha-461283-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m17s                  node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	  Normal  RegisteredNode           6m1s                   node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	  Normal  RegisteredNode           4m49s                  node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	  Normal  NodeNotReady             2m54s                  node-controller  Node ha-461283-m02 status is now: NodeNotReady
	
	
	Name:               ha-461283-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-461283-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-461283
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T10_49_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:49:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-461283-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 10:54:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:50:04 +0000   Mon, 22 Jul 2024 10:49:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:50:04 +0000   Mon, 22 Jul 2024 10:49:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:50:04 +0000   Mon, 22 Jul 2024 10:49:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:50:04 +0000   Mon, 22 Jul 2024 10:49:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-461283-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 daecc7f26d194772811b43378358ae92
	  System UUID:                daecc7f2-6d19-4772-811b-43378358ae92
	  Boot ID:                    d7ec2b29-5844-4c1f-be17-9ba20de6b894
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bf5vn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 etcd-ha-461283-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m6s
	  kube-system                 kindnet-9m2ms                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m8s
	  kube-system                 kube-apiserver-ha-461283-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-controller-manager-ha-461283-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-proxy-zdbjw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-scheduler-ha-461283-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-vip-ha-461283-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   100m (5%)
	  memory             150Mi (7%)   50Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m8s (x8 over 5m8s)  kubelet          Node ha-461283-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m8s (x8 over 5m8s)  kubelet          Node ha-461283-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m8s (x7 over 5m8s)  kubelet          Node ha-461283-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m7s                 node-controller  Node ha-461283-m03 event: Registered Node ha-461283-m03 in Controller
	  Normal  RegisteredNode           5m6s                 node-controller  Node ha-461283-m03 event: Registered Node ha-461283-m03 in Controller
	  Normal  RegisteredNode           4m49s                node-controller  Node ha-461283-m03 event: Registered Node ha-461283-m03 in Controller
	
	
	Name:               ha-461283-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-461283-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-461283
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T10_50_37_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:50:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-461283-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 10:54:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:51:07 +0000   Mon, 22 Jul 2024 10:50:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:51:07 +0000   Mon, 22 Jul 2024 10:50:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:51:07 +0000   Mon, 22 Jul 2024 10:50:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:51:07 +0000   Mon, 22 Jul 2024 10:50:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-461283-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 02bf2f0ce1a340479f7577f27f1f3419
	  System UUID:                02bf2f0c-e1a3-4047-9f75-77f27f1f3419
	  Boot ID:                    872589a4-4f7b-4349-a791-7c244df230df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8h8rp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m5s
	  kube-system                 kube-proxy-q6mgq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)   100m (5%)
	  memory             50Mi (2%)   50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m5s (x2 over 4m5s)  kubelet          Node ha-461283-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x2 over 4m5s)  kubelet          Node ha-461283-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x2 over 4m5s)  kubelet          Node ha-461283-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal  NodeReady                3m47s                kubelet          Node ha-461283-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul22 10:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049866] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038978] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.505156] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.146448] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.618407] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.217704] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.054835] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059084] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.188930] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[Jul22 10:47] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.257396] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +4.205609] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +3.948218] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.066710] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.986663] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.075913] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.885402] kauditd_printk_skb: 18 callbacks suppressed
	[ +22.062510] kauditd_printk_skb: 38 callbacks suppressed
	[Jul22 10:48] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08] <==
	{"level":"warn","ts":"2024-07-22T10:54:40.953675Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:40.969456Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:40.977902Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:40.984386Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:40.987684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:40.990602Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:40.998826Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.00435Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.004957Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.010507Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.013978Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.017373Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.031005Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.032717Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.033847Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.036248Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.039186Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.044631Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.048269Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.050745Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.052144Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.05647Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.063036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.068439Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:54:41.104201Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:54:41 up 7 min,  0 users,  load average: 0.17, 0.30, 0.18
	Linux ha-461283 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb] <==
	I0722 10:54:03.637917       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	I0722 10:54:13.645734       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 10:54:13.645825       1 main.go:299] handling current node
	I0722 10:54:13.645844       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 10:54:13.645850       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 10:54:13.646000       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0722 10:54:13.646025       1 main.go:322] Node ha-461283-m03 has CIDR [10.244.2.0/24] 
	I0722 10:54:13.646090       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 10:54:13.646111       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	I0722 10:54:23.644919       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 10:54:23.645016       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 10:54:23.645189       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0722 10:54:23.645212       1 main.go:322] Node ha-461283-m03 has CIDR [10.244.2.0/24] 
	I0722 10:54:23.645279       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 10:54:23.645297       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	I0722 10:54:23.645372       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 10:54:23.645391       1 main.go:299] handling current node
	I0722 10:54:33.636979       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 10:54:33.637024       1 main.go:299] handling current node
	I0722 10:54:33.637039       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 10:54:33.637060       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 10:54:33.637251       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0722 10:54:33.637275       1 main.go:322] Node ha-461283-m03 has CIDR [10.244.2.0/24] 
	I0722 10:54:33.637353       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 10:54:33.637372       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9ce5e449cc185968f3ceb60a6b397366e0ce5da8bed2aaf99f71b156613df39e] <==
	I0722 10:47:15.370718       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0722 10:47:15.378465       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.43]
	I0722 10:47:15.379695       1 controller.go:615] quota admission added evaluator for: endpoints
	I0722 10:47:15.385191       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0722 10:47:15.606948       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0722 10:47:16.584921       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0722 10:47:16.621767       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0722 10:47:16.635612       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0722 10:47:29.719176       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0722 10:47:29.970504       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0722 10:50:03.848951       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37786: use of closed network connection
	E0722 10:50:04.062242       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37796: use of closed network connection
	E0722 10:50:04.266766       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37818: use of closed network connection
	E0722 10:50:04.450098       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37830: use of closed network connection
	E0722 10:50:04.626203       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37842: use of closed network connection
	E0722 10:50:04.807649       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37862: use of closed network connection
	E0722 10:50:04.984137       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37874: use of closed network connection
	E0722 10:50:05.168704       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37894: use of closed network connection
	E0722 10:50:05.454231       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37920: use of closed network connection
	E0722 10:50:05.638627       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37934: use of closed network connection
	E0722 10:50:05.830620       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37954: use of closed network connection
	E0722 10:50:05.993232       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37970: use of closed network connection
	E0722 10:50:06.167765       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37988: use of closed network connection
	E0722 10:50:06.331279       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37996: use of closed network connection
	W0722 10:51:25.386272       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.127 192.168.39.43]
	
	
	==> kube-controller-manager [08c8bf4f5df71e6a77d448b9212062f96e90349adbbad8f2e329463bd3e1884d] <==
	I0722 10:48:24.764445       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-461283-m02"
	I0722 10:49:33.858754       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-461283-m03\" does not exist"
	I0722 10:49:33.900100       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-461283-m03" podCIDRs=["10.244.2.0/24"]
	I0722 10:49:34.792168       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-461283-m03"
	I0722 10:49:59.736360       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.282034ms"
	I0722 10:49:59.772290       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.752381ms"
	I0722 10:49:59.775843       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="343.44µs"
	I0722 10:49:59.778307       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.883µs"
	I0722 10:49:59.902309       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.263007ms"
	I0722 10:50:00.080226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="177.785274ms"
	I0722 10:50:00.102300       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.713011ms"
	I0722 10:50:00.102508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.548µs"
	I0722 10:50:01.476259       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.793µs"
	I0722 10:50:01.909019       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.104631ms"
	I0722 10:50:01.909184       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.29µs"
	I0722 10:50:02.292909       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.197887ms"
	I0722 10:50:02.293065       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.563µs"
	I0722 10:50:03.173507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.06045ms"
	I0722 10:50:03.174151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.901µs"
	I0722 10:50:36.409862       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-461283-m04\" does not exist"
	I0722 10:50:39.824309       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-461283-m04"
	I0722 10:50:54.481624       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-461283-m04"
	I0722 10:51:47.850513       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-461283-m04"
	I0722 10:51:47.952917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.983771ms"
	I0722 10:51:47.954751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.745µs"
	
	
	==> kube-proxy [8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44] <==
	I0722 10:47:30.927235       1 server_linux.go:69] "Using iptables proxy"
	I0722 10:47:30.946260       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.43"]
	I0722 10:47:31.023909       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 10:47:31.023974       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 10:47:31.023996       1 server_linux.go:165] "Using iptables Proxier"
	I0722 10:47:31.033400       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 10:47:31.033901       1 server.go:872] "Version info" version="v1.30.3"
	I0722 10:47:31.034458       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:47:31.037219       1 config.go:192] "Starting service config controller"
	I0722 10:47:31.037422       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 10:47:31.037494       1 config.go:101] "Starting endpoint slice config controller"
	I0722 10:47:31.037514       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 10:47:31.038605       1 config.go:319] "Starting node config controller"
	I0722 10:47:31.039656       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 10:47:31.137922       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 10:47:31.138129       1 shared_informer.go:320] Caches are synced for service config
	I0722 10:47:31.139959       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240] <==
	E0722 10:47:14.588060       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0722 10:47:14.619707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 10:47:14.619822       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 10:47:14.677040       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 10:47:14.677088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 10:47:14.694693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 10:47:14.694841       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 10:47:14.749285       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 10:47:14.749332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 10:47:15.011695       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 10:47:15.011746       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 10:47:15.029154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0722 10:47:15.029252       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0722 10:47:16.334995       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0722 10:49:59.728583       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-cgtcl\": pod busybox-fc5497c4f-cgtcl is already assigned to node \"ha-461283-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-cgtcl" node="ha-461283-m02"
	E0722 10:49:59.729634       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod cb9376f3-a8a3-4f85-a044-d0aa447ca494(default/busybox-fc5497c4f-cgtcl) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-cgtcl"
	E0722 10:49:59.729669       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-cgtcl\": pod busybox-fc5497c4f-cgtcl is already assigned to node \"ha-461283-m02\"" pod="default/busybox-fc5497c4f-cgtcl"
	I0722 10:49:59.729715       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-cgtcl" node="ha-461283-m02"
	E0722 10:49:59.736195       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-hkw9v\": pod busybox-fc5497c4f-hkw9v is already assigned to node \"ha-461283\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-hkw9v" node="ha-461283"
	E0722 10:49:59.736638       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 264707a6-61a4-4941-b996-0bebde73d4c7(default/busybox-fc5497c4f-hkw9v) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-hkw9v"
	E0722 10:49:59.736744       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-hkw9v\": pod busybox-fc5497c4f-hkw9v is already assigned to node \"ha-461283\"" pod="default/busybox-fc5497c4f-hkw9v"
	I0722 10:49:59.736843       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-hkw9v" node="ha-461283"
	E0722 10:50:36.492116       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8h8rp\": pod kindnet-8h8rp is already assigned to node \"ha-461283-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-8h8rp" node="ha-461283-m04"
	E0722 10:50:36.493842       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8h8rp\": pod kindnet-8h8rp is already assigned to node \"ha-461283-m04\"" pod="kube-system/kindnet-8h8rp"
	I0722 10:50:36.493969       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8h8rp" node="ha-461283-m04"
	
	
	==> kubelet <==
	Jul 22 10:50:16 ha-461283 kubelet[1372]: E0722 10:50:16.529544    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:50:16 ha-461283 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:50:16 ha-461283 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:50:16 ha-461283 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:50:16 ha-461283 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:51:16 ha-461283 kubelet[1372]: E0722 10:51:16.532151    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:51:16 ha-461283 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:51:16 ha-461283 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:51:16 ha-461283 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:51:16 ha-461283 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:52:16 ha-461283 kubelet[1372]: E0722 10:52:16.529087    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:52:16 ha-461283 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:52:16 ha-461283 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:52:16 ha-461283 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:52:16 ha-461283 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:53:16 ha-461283 kubelet[1372]: E0722 10:53:16.534219    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:53:16 ha-461283 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:53:16 ha-461283 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:53:16 ha-461283 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:53:16 ha-461283 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:54:16 ha-461283 kubelet[1372]: E0722 10:54:16.530448    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:54:16 ha-461283 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:54:16 ha-461283 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:54:16 ha-461283 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:54:16 ha-461283 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-461283 -n ha-461283
helpers_test.go:261: (dbg) Run:  kubectl --context ha-461283 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (62.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (353.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-461283 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-461283 -v=7 --alsologtostderr
E0722 10:56:36.611404   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-461283 -v=7 --alsologtostderr: exit status 82 (2m1.81203033s)

                                                
                                                
-- stdout --
	* Stopping node "ha-461283-m04"  ...
	* Stopping node "ha-461283-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 10:54:42.482535   30107 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:54:42.482646   30107 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:54:42.482655   30107 out.go:304] Setting ErrFile to fd 2...
	I0722 10:54:42.482660   30107 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:54:42.482826   30107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:54:42.483033   30107 out.go:298] Setting JSON to false
	I0722 10:54:42.483122   30107 mustload.go:65] Loading cluster: ha-461283
	I0722 10:54:42.483438   30107 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:54:42.483514   30107 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:54:42.483679   30107 mustload.go:65] Loading cluster: ha-461283
	I0722 10:54:42.483797   30107 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:54:42.483831   30107 stop.go:39] StopHost: ha-461283-m04
	I0722 10:54:42.484225   30107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:42.484270   30107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:42.498352   30107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36009
	I0722 10:54:42.498751   30107 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:42.499299   30107 main.go:141] libmachine: Using API Version  1
	I0722 10:54:42.499329   30107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:42.499629   30107 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:42.501858   30107 out.go:177] * Stopping node "ha-461283-m04"  ...
	I0722 10:54:42.502895   30107 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0722 10:54:42.502924   30107 main.go:141] libmachine: (ha-461283-m04) Calling .DriverName
	I0722 10:54:42.503103   30107 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0722 10:54:42.503126   30107 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHHostname
	I0722 10:54:42.505995   30107 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:42.506404   30107 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:50:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 10:54:42.506435   30107 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 10:54:42.506535   30107 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHPort
	I0722 10:54:42.506701   30107 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHKeyPath
	I0722 10:54:42.506890   30107 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHUsername
	I0722 10:54:42.507048   30107 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m04/id_rsa Username:docker}
	I0722 10:54:42.595830   30107 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0722 10:54:42.649274   30107 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0722 10:54:42.704357   30107 main.go:141] libmachine: Stopping "ha-461283-m04"...
	I0722 10:54:42.704414   30107 main.go:141] libmachine: (ha-461283-m04) Calling .GetState
	I0722 10:54:42.705860   30107 main.go:141] libmachine: (ha-461283-m04) Calling .Stop
	I0722 10:54:42.709268   30107 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 0/120
	I0722 10:54:43.844177   30107 main.go:141] libmachine: (ha-461283-m04) Calling .GetState
	I0722 10:54:43.845560   30107 main.go:141] libmachine: Machine "ha-461283-m04" was stopped.
	I0722 10:54:43.845576   30107 stop.go:75] duration metric: took 1.342682503s to stop
	I0722 10:54:43.845598   30107 stop.go:39] StopHost: ha-461283-m03
	I0722 10:54:43.845939   30107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:54:43.846000   30107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:54:43.860552   30107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43887
	I0722 10:54:43.860916   30107 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:54:43.861347   30107 main.go:141] libmachine: Using API Version  1
	I0722 10:54:43.861366   30107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:54:43.861685   30107 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:54:43.863424   30107 out.go:177] * Stopping node "ha-461283-m03"  ...
	I0722 10:54:43.864594   30107 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0722 10:54:43.864613   30107 main.go:141] libmachine: (ha-461283-m03) Calling .DriverName
	I0722 10:54:43.864831   30107 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0722 10:54:43.864852   30107 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHHostname
	I0722 10:54:43.867249   30107 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:43.867665   30107 main.go:141] libmachine: (ha-461283-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:8f:df", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:48:59 +0000 UTC Type:0 Mac:52:54:00:03:8f:df Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-461283-m03 Clientid:01:52:54:00:03:8f:df}
	I0722 10:54:43.867692   30107 main.go:141] libmachine: (ha-461283-m03) DBG | domain ha-461283-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:03:8f:df in network mk-ha-461283
	I0722 10:54:43.867878   30107 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHPort
	I0722 10:54:43.868051   30107 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHKeyPath
	I0722 10:54:43.868171   30107 main.go:141] libmachine: (ha-461283-m03) Calling .GetSSHUsername
	I0722 10:54:43.868276   30107 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m03/id_rsa Username:docker}
	I0722 10:54:43.955280   30107 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0722 10:54:44.008542   30107 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0722 10:54:44.062448   30107 main.go:141] libmachine: Stopping "ha-461283-m03"...
	I0722 10:54:44.062476   30107 main.go:141] libmachine: (ha-461283-m03) Calling .GetState
	I0722 10:54:44.064066   30107 main.go:141] libmachine: (ha-461283-m03) Calling .Stop
	I0722 10:54:44.067700   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 0/120
	I0722 10:54:45.068838   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 1/120
	I0722 10:54:46.070744   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 2/120
	I0722 10:54:47.072051   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 3/120
	I0722 10:54:48.073406   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 4/120
	I0722 10:54:49.075245   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 5/120
	I0722 10:54:50.076867   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 6/120
	I0722 10:54:51.078353   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 7/120
	I0722 10:54:52.079979   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 8/120
	I0722 10:54:53.081282   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 9/120
	I0722 10:54:54.083615   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 10/120
	I0722 10:54:55.084956   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 11/120
	I0722 10:54:56.086409   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 12/120
	I0722 10:54:57.087673   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 13/120
	I0722 10:54:58.089444   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 14/120
	I0722 10:54:59.091271   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 15/120
	I0722 10:55:00.092778   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 16/120
	I0722 10:55:01.093945   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 17/120
	I0722 10:55:02.095416   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 18/120
	I0722 10:55:03.096835   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 19/120
	I0722 10:55:04.098634   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 20/120
	I0722 10:55:05.099945   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 21/120
	I0722 10:55:06.101566   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 22/120
	I0722 10:55:07.102857   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 23/120
	I0722 10:55:08.104690   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 24/120
	I0722 10:55:09.106546   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 25/120
	I0722 10:55:10.107957   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 26/120
	I0722 10:55:11.109273   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 27/120
	I0722 10:55:12.110530   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 28/120
	I0722 10:55:13.111853   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 29/120
	I0722 10:55:14.113671   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 30/120
	I0722 10:55:15.115389   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 31/120
	I0722 10:55:16.116918   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 32/120
	I0722 10:55:17.118213   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 33/120
	I0722 10:55:18.119423   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 34/120
	I0722 10:55:19.121124   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 35/120
	I0722 10:55:20.122911   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 36/120
	I0722 10:55:21.124218   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 37/120
	I0722 10:55:22.125491   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 38/120
	I0722 10:55:23.127203   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 39/120
	I0722 10:55:24.129363   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 40/120
	I0722 10:55:25.130458   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 41/120
	I0722 10:55:26.131598   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 42/120
	I0722 10:55:27.132879   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 43/120
	I0722 10:55:28.134044   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 44/120
	I0722 10:55:29.135585   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 45/120
	I0722 10:55:30.136889   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 46/120
	I0722 10:55:31.138143   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 47/120
	I0722 10:55:32.139432   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 48/120
	I0722 10:55:33.140599   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 49/120
	I0722 10:55:34.142719   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 50/120
	I0722 10:55:35.144219   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 51/120
	I0722 10:55:36.145571   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 52/120
	I0722 10:55:37.146847   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 53/120
	I0722 10:55:38.148091   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 54/120
	I0722 10:55:39.149782   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 55/120
	I0722 10:55:40.151039   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 56/120
	I0722 10:55:41.152204   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 57/120
	I0722 10:55:42.153623   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 58/120
	I0722 10:55:43.154954   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 59/120
	I0722 10:55:44.156837   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 60/120
	I0722 10:55:45.158200   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 61/120
	I0722 10:55:46.159741   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 62/120
	I0722 10:55:47.161402   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 63/120
	I0722 10:55:48.162710   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 64/120
	I0722 10:55:49.164418   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 65/120
	I0722 10:55:50.166229   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 66/120
	I0722 10:55:51.167619   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 67/120
	I0722 10:55:52.169034   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 68/120
	I0722 10:55:53.170747   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 69/120
	I0722 10:55:54.172039   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 70/120
	I0722 10:55:55.173440   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 71/120
	I0722 10:55:56.174619   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 72/120
	I0722 10:55:57.176165   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 73/120
	I0722 10:55:58.177502   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 74/120
	I0722 10:55:59.179156   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 75/120
	I0722 10:56:00.180377   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 76/120
	I0722 10:56:01.181715   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 77/120
	I0722 10:56:02.182858   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 78/120
	I0722 10:56:03.184199   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 79/120
	I0722 10:56:04.185814   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 80/120
	I0722 10:56:05.187573   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 81/120
	I0722 10:56:06.188976   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 82/120
	I0722 10:56:07.190240   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 83/120
	I0722 10:56:08.191306   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 84/120
	I0722 10:56:09.193194   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 85/120
	I0722 10:56:10.194410   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 86/120
	I0722 10:56:11.195896   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 87/120
	I0722 10:56:12.197196   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 88/120
	I0722 10:56:13.198484   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 89/120
	I0722 10:56:14.200203   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 90/120
	I0722 10:56:15.201472   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 91/120
	I0722 10:56:16.202921   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 92/120
	I0722 10:56:17.204951   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 93/120
	I0722 10:56:18.206223   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 94/120
	I0722 10:56:19.207948   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 95/120
	I0722 10:56:20.209289   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 96/120
	I0722 10:56:21.211026   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 97/120
	I0722 10:56:22.212342   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 98/120
	I0722 10:56:23.213762   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 99/120
	I0722 10:56:24.215283   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 100/120
	I0722 10:56:25.217427   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 101/120
	I0722 10:56:26.219021   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 102/120
	I0722 10:56:27.221195   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 103/120
	I0722 10:56:28.222401   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 104/120
	I0722 10:56:29.224539   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 105/120
	I0722 10:56:30.225813   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 106/120
	I0722 10:56:31.226995   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 107/120
	I0722 10:56:32.228247   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 108/120
	I0722 10:56:33.229505   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 109/120
	I0722 10:56:34.231107   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 110/120
	I0722 10:56:35.232410   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 111/120
	I0722 10:56:36.233719   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 112/120
	I0722 10:56:37.235010   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 113/120
	I0722 10:56:38.236341   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 114/120
	I0722 10:56:39.237626   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 115/120
	I0722 10:56:40.239196   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 116/120
	I0722 10:56:41.240633   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 117/120
	I0722 10:56:42.242691   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 118/120
	I0722 10:56:43.244197   30107 main.go:141] libmachine: (ha-461283-m03) Waiting for machine to stop 119/120
	I0722 10:56:44.245544   30107 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0722 10:56:44.245616   30107 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0722 10:56:44.247514   30107 out.go:177] 
	W0722 10:56:44.248793   30107 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0722 10:56:44.248808   30107 out.go:239] * 
	* 
	W0722 10:56:44.251769   30107 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 10:56:44.253093   30107 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 stop -p ha-461283 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-461283 --wait=true -v=7 --alsologtostderr
E0722 10:57:59.657795   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 10:58:29.087161   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-461283 --wait=true -v=7 --alsologtostderr: (3m49.202172797s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-461283
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-461283 -n ha-461283
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-461283 logs -n 25: (1.752929998s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-461283 cp ha-461283-m03:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m02:/home/docker/cp-test_ha-461283-m03_ha-461283-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283-m02 sudo cat                                          | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m03_ha-461283-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m03:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04:/home/docker/cp-test_ha-461283-m03_ha-461283-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283-m04 sudo cat                                          | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m03_ha-461283-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-461283 cp testdata/cp-test.txt                                                | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3161647133/001/cp-test_ha-461283-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283:/home/docker/cp-test_ha-461283-m04_ha-461283.txt                       |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283 sudo cat                                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m04_ha-461283.txt                                 |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m02:/home/docker/cp-test_ha-461283-m04_ha-461283-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283-m02 sudo cat                                          | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m04_ha-461283-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m03:/home/docker/cp-test_ha-461283-m04_ha-461283-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283-m03 sudo cat                                          | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m04_ha-461283-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-461283 node stop m02 -v=7                                                     | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-461283 node start m02 -v=7                                                    | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-461283 -v=7                                                           | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-461283 -v=7                                                                | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-461283 --wait=true -v=7                                                    | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:56 UTC | 22 Jul 24 11:00 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-461283                                                                | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 11:00 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 10:56:44
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 10:56:44.293597   30556 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:56:44.293698   30556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:56:44.293708   30556 out.go:304] Setting ErrFile to fd 2...
	I0722 10:56:44.293713   30556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:56:44.293921   30556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:56:44.294459   30556 out.go:298] Setting JSON to false
	I0722 10:56:44.295347   30556 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2356,"bootTime":1721643448,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 10:56:44.295400   30556 start.go:139] virtualization: kvm guest
	I0722 10:56:44.297333   30556 out.go:177] * [ha-461283] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 10:56:44.298738   30556 notify.go:220] Checking for updates...
	I0722 10:56:44.298750   30556 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 10:56:44.300061   30556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 10:56:44.301242   30556 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:56:44.302366   30556 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:56:44.303486   30556 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 10:56:44.304572   30556 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 10:56:44.305965   30556 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:56:44.306077   30556 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 10:56:44.306506   30556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:56:44.306558   30556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:56:44.321850   30556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0722 10:56:44.322216   30556 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:56:44.322722   30556 main.go:141] libmachine: Using API Version  1
	I0722 10:56:44.322743   30556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:56:44.323062   30556 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:56:44.323228   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:56:44.358001   30556 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 10:56:44.359102   30556 start.go:297] selected driver: kvm2
	I0722 10:56:44.359128   30556 start.go:901] validating driver "kvm2" against &{Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.250 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:56:44.359270   30556 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 10:56:44.359627   30556 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:56:44.359717   30556 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 10:56:44.374190   30556 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 10:56:44.374944   30556 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 10:56:44.374977   30556 cni.go:84] Creating CNI manager for ""
	I0722 10:56:44.374985   30556 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0722 10:56:44.375052   30556 start.go:340] cluster config:
	{Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.250 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:56:44.375209   30556 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:56:44.376791   30556 out.go:177] * Starting "ha-461283" primary control-plane node in "ha-461283" cluster
	I0722 10:56:44.378095   30556 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:56:44.378126   30556 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 10:56:44.378136   30556 cache.go:56] Caching tarball of preloaded images
	I0722 10:56:44.378217   30556 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 10:56:44.378230   30556 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
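	The preload step above reuses the tarball already present in the local cache rather than downloading it again. A minimal spot-check of that cache entry, assuming the Jenkins paths shown in the log:
	  # confirm the cached preload tarball referenced above is present
	  ls -lh /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4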
	I0722 10:56:44.378350   30556 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:56:44.378520   30556 start.go:360] acquireMachinesLock for ha-461283: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 10:56:44.378554   30556 start.go:364] duration metric: took 18.755µs to acquireMachinesLock for "ha-461283"
	I0722 10:56:44.378567   30556 start.go:96] Skipping create...Using existing machine configuration
	I0722 10:56:44.378574   30556 fix.go:54] fixHost starting: 
	I0722 10:56:44.378858   30556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:56:44.378904   30556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:56:44.392719   30556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0722 10:56:44.393119   30556 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:56:44.393608   30556 main.go:141] libmachine: Using API Version  1
	I0722 10:56:44.393631   30556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:56:44.393974   30556 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:56:44.394179   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:56:44.394331   30556 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:56:44.395643   30556 fix.go:112] recreateIfNeeded on ha-461283: state=Running err=<nil>
	W0722 10:56:44.395663   30556 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 10:56:44.397212   30556 out.go:177] * Updating the running kvm2 "ha-461283" VM ...
	I0722 10:56:44.398397   30556 machine.go:94] provisionDockerMachine start ...
	I0722 10:56:44.398413   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:56:44.398576   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:56:44.400658   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.401029   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:56:44.401062   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.401180   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:56:44.401338   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:44.401469   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:44.401612   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:56:44.401738   30556 main.go:141] libmachine: Using SSH client type: native
	I0722 10:56:44.401943   30556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:56:44.401957   30556 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 10:56:44.505850   30556 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-461283
	
	I0722 10:56:44.505878   30556 main.go:141] libmachine: (ha-461283) Calling .GetMachineName
	I0722 10:56:44.506102   30556 buildroot.go:166] provisioning hostname "ha-461283"
	I0722 10:56:44.506130   30556 main.go:141] libmachine: (ha-461283) Calling .GetMachineName
	I0722 10:56:44.506287   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:56:44.509050   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.509459   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:56:44.509481   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.509610   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:56:44.509777   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:44.509957   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:44.510079   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:56:44.510235   30556 main.go:141] libmachine: Using SSH client type: native
	I0722 10:56:44.510392   30556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:56:44.510402   30556 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-461283 && echo "ha-461283" | sudo tee /etc/hostname
	I0722 10:56:44.633049   30556 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-461283
	
	I0722 10:56:44.633077   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:56:44.635818   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.636210   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:56:44.636236   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.636422   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:56:44.636610   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:44.636792   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:44.636967   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:56:44.637165   30556 main.go:141] libmachine: Using SSH client type: native
	I0722 10:56:44.637337   30556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:56:44.637353   30556 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-461283' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-461283/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-461283' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 10:56:44.741436   30556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 10:56:44.741464   30556 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 10:56:44.741489   30556 buildroot.go:174] setting up certificates
	I0722 10:56:44.741499   30556 provision.go:84] configureAuth start
	I0722 10:56:44.741510   30556 main.go:141] libmachine: (ha-461283) Calling .GetMachineName
	I0722 10:56:44.741731   30556 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:56:44.744185   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.744575   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:56:44.744597   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.744703   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:56:44.746507   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.746804   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:56:44.746824   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.746977   30556 provision.go:143] copyHostCerts
	I0722 10:56:44.747012   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 10:56:44.747050   30556 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 10:56:44.747061   30556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 10:56:44.747122   30556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 10:56:44.747250   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 10:56:44.747270   30556 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 10:56:44.747277   30556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 10:56:44.747307   30556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 10:56:44.747378   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 10:56:44.747397   30556 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 10:56:44.747406   30556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 10:56:44.747440   30556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 10:56:44.747490   30556 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.ha-461283 san=[127.0.0.1 192.168.39.43 ha-461283 localhost minikube]
	I0722 10:56:44.846180   30556 provision.go:177] copyRemoteCerts
	I0722 10:56:44.846230   30556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 10:56:44.846250   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:56:44.848578   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.848915   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:56:44.848954   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.849115   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:56:44.849279   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:44.849384   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:56:44.849482   30556 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:56:44.932121   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 10:56:44.932199   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 10:56:44.959792   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 10:56:44.959866   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0722 10:56:44.985600   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 10:56:44.985667   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 10:56:45.010882   30556 provision.go:87] duration metric: took 269.357091ms to configureAuth
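	configureAuth above regenerates the server certificate with SANs for 127.0.0.1, 192.168.39.43, ha-461283, localhost and minikube, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the VM. A minimal sketch for verifying the installed SANs, assuming the minikube CLI for this profile and openssl are available:
	  # inspect the SANs of the server certificate provisioned above
	  minikube -p ha-461283 ssh -- "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"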
	I0722 10:56:45.010907   30556 buildroot.go:189] setting minikube options for container-runtime
	I0722 10:56:45.011114   30556 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:56:45.011182   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:56:45.013730   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:45.014136   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:56:45.014164   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:45.014347   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:56:45.014519   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:45.014666   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:45.014813   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:56:45.014940   30556 main.go:141] libmachine: Using SSH client type: native
	I0722 10:56:45.015080   30556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:56:45.015098   30556 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 10:58:15.961072   30556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 10:58:15.961100   30556 machine.go:97] duration metric: took 1m31.562689759s to provisionDockerMachine
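	The 1m31s spent in provisionDockerMachine is almost entirely the single SSH command above, which writes /etc/sysconfig/crio.minikube and restarts CRI-O: it was issued at 10:56:45 and only returned at 10:58:15. A minimal sketch for digging into that restart, assuming the minikube CLI for this profile:
	  # look at CRI-O's own restart timeline inside the VM
	  minikube -p ha-461283 ssh -- "sudo journalctl -u crio --no-pager | tail -n 50"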
	I0722 10:58:15.961114   30556 start.go:293] postStartSetup for "ha-461283" (driver="kvm2")
	I0722 10:58:15.961129   30556 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 10:58:15.961173   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:58:15.961483   30556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 10:58:15.961509   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:58:15.964279   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:15.964726   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:58:15.964745   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:15.964916   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:58:15.965113   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:58:15.965269   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:58:15.965392   30556 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:58:16.048142   30556 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 10:58:16.052793   30556 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 10:58:16.052814   30556 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 10:58:16.052887   30556 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 10:58:16.052956   30556 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 10:58:16.052966   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /etc/ssl/certs/130982.pem
	I0722 10:58:16.053043   30556 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 10:58:16.062947   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 10:58:16.087530   30556 start.go:296] duration metric: took 126.401427ms for postStartSetup
	I0722 10:58:16.087568   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:58:16.087880   30556 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0722 10:58:16.087917   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:58:16.090341   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.090735   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:58:16.090761   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.090872   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:58:16.091058   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:58:16.091233   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:58:16.091361   30556 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	W0722 10:58:16.172748   30556 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0722 10:58:16.172772   30556 fix.go:56] duration metric: took 1m31.794197231s for fixHost
	I0722 10:58:16.172797   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:58:16.175297   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.175650   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:58:16.175678   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.175792   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:58:16.176000   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:58:16.176152   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:58:16.176291   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:58:16.176454   30556 main.go:141] libmachine: Using SSH client type: native
	I0722 10:58:16.176611   30556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:58:16.176621   30556 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 10:58:16.281149   30556 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645896.227619791
	
	I0722 10:58:16.281173   30556 fix.go:216] guest clock: 1721645896.227619791
	I0722 10:58:16.281190   30556 fix.go:229] Guest: 2024-07-22 10:58:16.227619791 +0000 UTC Remote: 2024-07-22 10:58:16.172780914 +0000 UTC m=+91.911323146 (delta=54.838877ms)
	I0722 10:58:16.281208   30556 fix.go:200] guest clock delta is within tolerance: 54.838877ms
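	The guest-clock check above runs date +%s.%N over SSH and compares the result with the host wall clock, resynchronizing only if the delta exceeds tolerance. A minimal sketch of the same comparison done by hand, assuming the minikube CLI for this profile and bc on the host:
	  # compare host and guest clocks the same way fix.go does
	  host_ts=$(date +%s.%N)
	  guest_ts=$(minikube -p ha-461283 ssh -- date +%s.%N)
	  echo "delta: $(echo "$host_ts - $guest_ts" | bc) s"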
	I0722 10:58:16.281212   30556 start.go:83] releasing machines lock for "ha-461283", held for 1m31.902650281s
	I0722 10:58:16.281230   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:58:16.281499   30556 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:58:16.283794   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.284179   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:58:16.284216   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.284346   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:58:16.284839   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:58:16.285007   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:58:16.285103   30556 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 10:58:16.285139   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:58:16.285174   30556 ssh_runner.go:195] Run: cat /version.json
	I0722 10:58:16.285196   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:58:16.287595   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.287929   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.287958   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:58:16.287974   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.288085   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:58:16.288240   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:58:16.288333   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:58:16.288349   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:58:16.288357   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.288533   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:58:16.288539   30556 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:58:16.288696   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:58:16.288821   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:58:16.288974   30556 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:58:16.387199   30556 ssh_runner.go:195] Run: systemctl --version
	I0722 10:58:16.393211   30556 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 10:58:16.554896   30556 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 10:58:16.562130   30556 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 10:58:16.562191   30556 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 10:58:16.572269   30556 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0722 10:58:16.572294   30556 start.go:495] detecting cgroup driver to use...
	I0722 10:58:16.572365   30556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 10:58:16.591559   30556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 10:58:16.613458   30556 docker.go:217] disabling cri-docker service (if available) ...
	I0722 10:58:16.613516   30556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 10:58:16.629634   30556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 10:58:16.647508   30556 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 10:58:16.825084   30556 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 10:58:16.992627   30556 docker.go:233] disabling docker service ...
	I0722 10:58:16.992698   30556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 10:58:17.010324   30556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 10:58:17.024874   30556 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 10:58:17.172822   30556 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 10:58:17.320679   30556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 10:58:17.344946   30556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 10:58:17.363236   30556 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 10:58:17.363286   30556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:58:17.373384   30556 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 10:58:17.373431   30556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:58:17.383546   30556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:58:17.393444   30556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:58:17.403689   30556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 10:58:17.413797   30556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:58:17.423796   30556 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:58:17.434008   30556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:58:17.444942   30556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 10:58:17.453947   30556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 10:58:17.462864   30556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:58:17.609669   30556 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 10:58:17.859336   30556 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 10:58:17.859407   30556 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 10:58:17.864618   30556 start.go:563] Will wait 60s for crictl version
	I0722 10:58:17.864680   30556 ssh_runner.go:195] Run: which crictl
	I0722 10:58:17.868449   30556 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 10:58:17.902879   30556 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 10:58:17.902963   30556 ssh_runner.go:195] Run: crio --version
	I0722 10:58:17.941305   30556 ssh_runner.go:195] Run: crio --version
	I0722 10:58:17.972942   30556 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
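	The preceding block rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image registry.k8s.io/pause:3.9, cgroup_manager "cgroupfs", conmon_cgroup "pod", net.ipv4.ip_unprivileged_port_start=0) and restarts CRI-O. A minimal sketch for confirming the applied settings, assuming the minikube CLI for this profile:
	  # verify the CRI-O drop-in edited above and the runtime version it reports
	  minikube -p ha-461283 ssh -- "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf && sudo crictl version"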
	I0722 10:58:17.974424   30556 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:58:17.977003   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:17.977443   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:58:17.977466   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:17.977696   30556 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 10:58:17.982428   30556 kubeadm.go:883] updating cluster {Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.250 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 10:58:17.982550   30556 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:58:17.982587   30556 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 10:58:18.023475   30556 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 10:58:18.023499   30556 crio.go:433] Images already preloaded, skipping extraction
	I0722 10:58:18.023552   30556 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 10:58:18.060289   30556 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 10:58:18.060313   30556 cache_images.go:84] Images are preloaded, skipping loading
	I0722 10:58:18.060322   30556 kubeadm.go:934] updating node { 192.168.39.43 8443 v1.30.3 crio true true} ...
	I0722 10:58:18.060433   30556 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-461283 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
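	The kubelet fragment above becomes the systemd drop-in written later in this run as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal sketch for viewing the effective unit on the node, assuming the minikube CLI for this profile:
	  # show the kubelet unit together with the 10-kubeadm.conf drop-in
	  minikube -p ha-461283 ssh -- sudo systemctl cat kubelet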
	I0722 10:58:18.060504   30556 ssh_runner.go:195] Run: crio config
	I0722 10:58:18.104870   30556 cni.go:84] Creating CNI manager for ""
	I0722 10:58:18.104892   30556 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0722 10:58:18.104903   30556 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 10:58:18.104926   30556 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-461283 NodeName:ha-461283 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 10:58:18.105085   30556 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-461283"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.43
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
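	The kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is rendered to /var/tmp/minikube/kubeadm.yaml.new further down in this run. A minimal spot-check of the networking fields it carries, assuming the minikube CLI for this profile:
	  # confirm the rendered kubeadm config on the primary control plane
	  minikube -p ha-461283 ssh -- "sudo grep -E 'controlPlaneEndpoint|podSubnet|serviceSubnet|advertiseAddress' /var/tmp/minikube/kubeadm.yaml.new"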
	I0722 10:58:18.105111   30556 kube-vip.go:115] generating kube-vip config ...
	I0722 10:58:18.105147   30556 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 10:58:18.116340   30556 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 10:58:18.116450   30556 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
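	The static-pod manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml further down and makes kube-vip announce the API VIP 192.168.39.254 with control-plane load balancing on port 8443. A minimal sketch for checking that the pod is running and the VIP answers, assuming the minikube CLI for this profile (the /version probe may return an auth error without credentials, which still shows the VIP is reachable):
	  # confirm the kube-vip container is up and the HA VIP serves the API port
	  minikube -p ha-461283 ssh -- "sudo crictl ps --name kube-vip && curl -sk https://192.168.39.254:8443/version"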
	I0722 10:58:18.116508   30556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 10:58:18.126063   30556 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 10:58:18.126128   30556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0722 10:58:18.134980   30556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0722 10:58:18.151480   30556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 10:58:18.167023   30556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0722 10:58:18.183341   30556 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0722 10:58:18.201374   30556 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0722 10:58:18.205250   30556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:58:18.348233   30556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 10:58:18.363198   30556 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283 for IP: 192.168.39.43
	I0722 10:58:18.363216   30556 certs.go:194] generating shared ca certs ...
	I0722 10:58:18.363233   30556 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:58:18.363378   30556 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 10:58:18.363418   30556 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 10:58:18.363427   30556 certs.go:256] generating profile certs ...
	I0722 10:58:18.363504   30556 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key
	I0722 10:58:18.363532   30556 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.cf901a5f
	I0722 10:58:18.363547   30556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.cf901a5f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.43 192.168.39.207 192.168.39.127 192.168.39.254]
	I0722 10:58:18.578600   30556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.cf901a5f ...
	I0722 10:58:18.578633   30556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.cf901a5f: {Name:mk4d2f492b7ec7771aafc14b7c1acbc783e197ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:58:18.578810   30556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.cf901a5f ...
	I0722 10:58:18.578823   30556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.cf901a5f: {Name:mk42d4f337ea9970724178a867ba676d0b7166a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:58:18.578905   30556 certs.go:381] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.cf901a5f -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt
	I0722 10:58:18.579061   30556 certs.go:385] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.cf901a5f -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key
	I0722 10:58:18.579199   30556 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key
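
The apiserver certificate generated above must carry every address a client might use: the service ClusterIP (10.96.0.1), localhost, each control-plane node IP, and the kube-vip VIP 192.168.39.254. A hedged sketch for inspecting the SANs of the resulting certificate on the test host (path taken from the log):

  # Print the Subject Alternative Names baked into the generated apiserver certificate.
  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt \
    | grep -A1 "Subject Alternative Name"
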
	I0722 10:58:18.579215   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 10:58:18.579230   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 10:58:18.579244   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 10:58:18.579259   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 10:58:18.579276   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 10:58:18.579291   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 10:58:18.579308   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 10:58:18.579322   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 10:58:18.579384   30556 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 10:58:18.579415   30556 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 10:58:18.579426   30556 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 10:58:18.579448   30556 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 10:58:18.579487   30556 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 10:58:18.579522   30556 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 10:58:18.579563   30556 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 10:58:18.579592   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:58:18.579608   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem -> /usr/share/ca-certificates/13098.pem
	I0722 10:58:18.579623   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /usr/share/ca-certificates/130982.pem
	I0722 10:58:18.580176   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 10:58:18.604679   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 10:58:18.627775   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 10:58:18.651493   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 10:58:18.674406   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 10:58:18.696645   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 10:58:18.719044   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 10:58:18.740850   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 10:58:18.763232   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 10:58:18.786022   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 10:58:18.808895   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 10:58:18.831242   30556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 10:58:18.846985   30556 ssh_runner.go:195] Run: openssl version
	I0722 10:58:18.853303   30556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 10:58:18.863811   30556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:58:18.916760   30556 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:58:18.916826   30556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:58:18.945480   30556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 10:58:18.973789   30556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 10:58:19.197308   30556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 10:58:19.258914   30556 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 10:58:19.258988   30556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 10:58:19.328765   30556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 10:58:19.382917   30556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 10:58:19.410518   30556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 10:58:19.454817   30556 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 10:58:19.454894   30556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 10:58:19.538821   30556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
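
The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the respective certificates; OpenSSL resolves trusted CAs in /etc/ssl/certs by looking up <subject-hash>.0. A minimal sketch of how such a link is derived, using the minikubeCA path from the log:

  # Compute the subject hash OpenSSL uses for the symlink name, then create the link.
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
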
	I0722 10:58:19.688767   30556 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 10:58:19.727644   30556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 10:58:19.804915   30556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 10:58:19.866213   30556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 10:58:19.897280   30556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 10:58:19.910329   30556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 10:58:19.970626   30556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
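
Each of the -checkend 86400 probes above exits 0 only if the certificate remains valid for at least the next 86400 seconds (24 hours); a non-zero exit flags a certificate that is expired or about to expire. A standalone sketch of the same check:

  # Exit status 0: still valid 24h from now; 1: expires within 24h (or already expired).
  openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
    && echo "valid for >= 24h" || echo "expires within 24h"
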
	I0722 10:58:20.001932   30556 kubeadm.go:392] StartCluster: {Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.250 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:58:20.002100   30556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 10:58:20.002187   30556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 10:58:20.107111   30556 cri.go:89] found id: "db94009c521f9822effbf7134357899e47ffe347dcd2c5651040fefa4cca33b2"
	I0722 10:58:20.107139   30556 cri.go:89] found id: "394a8f4400ea3af4e39dfd78d7b3e2d8915e8e032aaab6f96a2cc89c41384bfd"
	I0722 10:58:20.107146   30556 cri.go:89] found id: "18af36a6c7e03b1d474476ce8bb6a180b7f5593bf25ef5639e35ee96f7fd9b7b"
	I0722 10:58:20.107152   30556 cri.go:89] found id: "ea5d5b8c8175c7f9b6fa74f3c735f5b5e950f3cae2432c981a515095aff7cbde"
	I0722 10:58:20.107157   30556 cri.go:89] found id: "3b03d6c4e851c5841d4c4a484805940e2d56e4bfef5f95ad9c5c11320f5ce2e2"
	I0722 10:58:20.107163   30556 cri.go:89] found id: "b79330205b3b34929616350d75a92dfb6b89364825873410805dfc7c904ffe48"
	I0722 10:58:20.107168   30556 cri.go:89] found id: "55b27c32c654e8450ab3013a13dfb71de85f5bd30812faee5de5482a651d8eea"
	I0722 10:58:20.107173   30556 cri.go:89] found id: "239d38a66181bacbf4ff6f4b6c27636a837636afff840f23efb250862938263c"
	I0722 10:58:20.107178   30556 cri.go:89] found id: "5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719"
	I0722 10:58:20.107187   30556 cri.go:89] found id: "797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a"
	I0722 10:58:20.107192   30556 cri.go:89] found id: "165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb"
	I0722 10:58:20.107197   30556 cri.go:89] found id: "8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44"
	I0722 10:58:20.107202   30556 cri.go:89] found id: "70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240"
	I0722 10:58:20.107207   30556 cri.go:89] found id: "08c8bf4f5df71e6a77d448b9212062f96e90349adbbad8f2e329463bd3e1884d"
	I0722 10:58:20.107214   30556 cri.go:89] found id: "dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08"
	I0722 10:58:20.107220   30556 cri.go:89] found id: ""
	I0722 10:58:20.107272   30556 ssh_runner.go:195] Run: sudo runc list -f json
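
The container IDs above come from crictl filtering on the kube-system namespace label, after which runc list enumerates the low-level runtime state. A hedged sketch for reproducing the same view with human-readable columns on the node:

  # List kube-system containers (running and exited) with names, state, and pod.
  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
  # Inspect one ID taken from the log output above.
  sudo crictl inspect db94009c521f9822effbf7134357899e47ffe347dcd2c5651040fefa4cca33b2
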
	
	
	==> CRI-O <==
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.089333678Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721646034089299885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76783fae-12e5-4716-a7da-f2353678945a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.090064043Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a87d9e75-bd32-4bc8-be78-8d5e193aca7f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.090127202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a87d9e75-bd32-4bc8-be78-8d5e193aca7f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.090990913Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d31f2a8013d2e24edc425273e44425f67e6b9bb2949a0bae5a2fc61a7180c0c,PodSandboxId:d7505be00a4dd0017426e901923c85556c7e1aab9ce2236d3bdf2f58ab23daad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721645982504681132,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dedb5e16ff7cefc4f7b2de3aab3f3666890577319fff80f0892bd25b07235ee4,PodSandboxId:122f37260fa272181bc977c5fab28d93568ba79ff386bcead824d3b18fdf5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721645944506619235,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d3862fae152922959e5745226d0d0346254a37fa19be0adbe29b48b98ca54f,PodSandboxId:1408cf32a9b111ad0d05c077ec792e7280e1c819bdab24d4ec83ae665e7eb31d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721645939505166534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Annotations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b666c66aefa1df7559d97096e7dbcc4b6708df300667bc67439e21aebef5507c,PodSandboxId:d7505be00a4dd0017426e901923c85556c7e1aab9ce2236d3bdf2f58ab23daad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721645937505645302,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97907e6d90193f3e298b74cb4c1ffbf5728c2a1c0b4e9f3b92965be2e2bd229,PodSandboxId:6b9fbbc4ff4d170d0fbaa8ea3ce27d5acaa45194f4dbdcb8c21011da489de5ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721645932944146967,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4fac42e604059d8b7bb12caf1f0c51694a1b661ab66338849703b3fbb4795e,PodSandboxId:539884c662756c5287d2b1fb6603b44f5fc001982dc4a7c3e612abec844858f1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721645911704398704,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 784e450de252cbe54f11c8aea749b974,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b8c278ca2273efc18e02c03ef51d705a9f50bc891b8d9a87cd8017ec61ffa6,PodSandboxId:e5bfddee43aaff99037f91d93444606703b272918e4137afe75feeabb3aa8498,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721645900810388135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:b354707c2b811ad6c903db093fc012a7903131693e84bd68bec08a423d37bfef,PodSandboxId:6f47141e7863442f1f1e1503d29ca7cf4025d4a49f20e97deadfde14147edca1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645899806127192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db94009c521f9822effbf7134357899e47ffe347dcd2c5651040fefa4cca33b2,PodSandboxId:640d3f2649cc233feb8cb448344c36e0aa252e2c06d532d2bf22b136b8c0b86f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721645899564553650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e24b057cc5e5b35b2d3967b6d4f607365955ef314fda4827d9bef6dbe115f61,PodSandboxId:e1b9e5c8554a61e601b07794864b22dcfadd7b2f1567561e191febd07d276299,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721645899554678964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394a8f4400ea3af4e39dfd78d7b3e2d8915e8e032aaab6f96a2cc89c41384bfd,PodSandboxId:e48aa2925bd6b9beb5f20038d85f152677c8328cbbfbe7c2cf8521228a46b709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645899461053883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e97d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18af36a6c7e03b1d474476ce8bb6a180b7f5593bf25ef5639e35ee96f7fd9b7b,PodSandboxId:122f37260fa272181bc977c5fab28d93568ba79ff386bcead824d3b18fdf5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721645899340665928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b03d6c4e851c5841d4c4a484805940e2d56e4bfef5f95ad9c5c11320f5ce2e2,PodSandboxId:767e5480a736f2cfadbd9af1f98ab5c4e9e00f4af6962bdcb8f0a372b607350a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721645899255967056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d2
0cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea5d5b8c8175c7f9b6fa74f3c735f5b5e950f3cae2432c981a515095aff7cbde,PodSandboxId:1408cf32a9b111ad0d05c077ec792e7280e1c819bdab24d4ec83ae665e7eb31d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721645899262523073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Ann
otations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e0d7d39c32b26c8cf125a279920668ca2e951905242a4868edd6a21fca1416c,PodSandboxId:816fd2e7cd706d322555f2f36fbd206ae986d8ed89be70bd1de6c5b649078cfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721645402571179847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719,PodSandboxId:4723f41d773ba6946b0397961e08b81adf4a47a279dd5f445f56c2a16eef40bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721645264374702822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kube
rnetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a,PodSandboxId:0c2ec5e338fb367db8b1a2ad528c7af8ed202e4bbaeab5159468a25321378cae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721645264350452857,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e97d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb,PodSandboxId:e171bdcb5b84c953c7434405b97b99a193f09e165b7345f7c8825f108427dce6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721645252505387479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44,PodSandboxId:ffbce6c0af4bc0412cbe807c70c74257bbd3514d683fc4a1672124862f6298c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721645250607555032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240,PodSandboxId:54a1041d8e184916319d562d688313e1c1dd4452462b22bf88d27c23483b8d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721645230463280784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d20cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08,PodSandboxId:e5abe1a4431950fd7a30c4d790682bbf3541b7faca55396453e5041f929979a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721645230331861897,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a87d9e75-bd32-4bc8-be78-8d5e193aca7f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.143688940Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3eeae7ff-65cc-4581-a44f-a9cbc15be313 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.143763255Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3eeae7ff-65cc-4581-a44f-a9cbc15be313 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.146467876Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d78660a-677c-401e-b011-0366bc2a4c94 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.146961266Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721646034146939230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d78660a-677c-401e-b011-0366bc2a4c94 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.147517928Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e83e2c6-b5c3-44e3-99ec-a8a298c2a000 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.147601878Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e83e2c6-b5c3-44e3-99ec-a8a298c2a000 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.148272851Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d31f2a8013d2e24edc425273e44425f67e6b9bb2949a0bae5a2fc61a7180c0c,PodSandboxId:d7505be00a4dd0017426e901923c85556c7e1aab9ce2236d3bdf2f58ab23daad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721645982504681132,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dedb5e16ff7cefc4f7b2de3aab3f3666890577319fff80f0892bd25b07235ee4,PodSandboxId:122f37260fa272181bc977c5fab28d93568ba79ff386bcead824d3b18fdf5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721645944506619235,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d3862fae152922959e5745226d0d0346254a37fa19be0adbe29b48b98ca54f,PodSandboxId:1408cf32a9b111ad0d05c077ec792e7280e1c819bdab24d4ec83ae665e7eb31d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721645939505166534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Annotations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b666c66aefa1df7559d97096e7dbcc4b6708df300667bc67439e21aebef5507c,PodSandboxId:d7505be00a4dd0017426e901923c85556c7e1aab9ce2236d3bdf2f58ab23daad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721645937505645302,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97907e6d90193f3e298b74cb4c1ffbf5728c2a1c0b4e9f3b92965be2e2bd229,PodSandboxId:6b9fbbc4ff4d170d0fbaa8ea3ce27d5acaa45194f4dbdcb8c21011da489de5ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721645932944146967,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4fac42e604059d8b7bb12caf1f0c51694a1b661ab66338849703b3fbb4795e,PodSandboxId:539884c662756c5287d2b1fb6603b44f5fc001982dc4a7c3e612abec844858f1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721645911704398704,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 784e450de252cbe54f11c8aea749b974,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b8c278ca2273efc18e02c03ef51d705a9f50bc891b8d9a87cd8017ec61ffa6,PodSandboxId:e5bfddee43aaff99037f91d93444606703b272918e4137afe75feeabb3aa8498,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721645900810388135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:b354707c2b811ad6c903db093fc012a7903131693e84bd68bec08a423d37bfef,PodSandboxId:6f47141e7863442f1f1e1503d29ca7cf4025d4a49f20e97deadfde14147edca1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645899806127192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db94009c521f9822effbf7134357899e47ffe347dcd2c5651040fefa4cca33b2,PodSandboxId:640d3f2649cc233feb8cb448344c36e0aa252e2c06d532d2bf22b136b8c0b86f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721645899564553650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e24b057cc5e5b35b2d3967b6d4f607365955ef314fda4827d9bef6dbe115f61,PodSandboxId:e1b9e5c8554a61e601b07794864b22dcfadd7b2f1567561e191febd07d276299,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721645899554678964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394a8f4400ea3af4e39dfd78d7b3e2d8915e8e032aaab6f96a2cc89c41384bfd,PodSandboxId:e48aa2925bd6b9beb5f20038d85f152677c8328cbbfbe7c2cf8521228a46b709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645899461053883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e97d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18af36a6c7e03b1d474476ce8bb6a180b7f5593bf25ef5639e35ee96f7fd9b7b,PodSandboxId:122f37260fa272181bc977c5fab28d93568ba79ff386bcead824d3b18fdf5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721645899340665928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b03d6c4e851c5841d4c4a484805940e2d56e4bfef5f95ad9c5c11320f5ce2e2,PodSandboxId:767e5480a736f2cfadbd9af1f98ab5c4e9e00f4af6962bdcb8f0a372b607350a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721645899255967056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d2
0cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea5d5b8c8175c7f9b6fa74f3c735f5b5e950f3cae2432c981a515095aff7cbde,PodSandboxId:1408cf32a9b111ad0d05c077ec792e7280e1c819bdab24d4ec83ae665e7eb31d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721645899262523073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Ann
otations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e0d7d39c32b26c8cf125a279920668ca2e951905242a4868edd6a21fca1416c,PodSandboxId:816fd2e7cd706d322555f2f36fbd206ae986d8ed89be70bd1de6c5b649078cfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721645402571179847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719,PodSandboxId:4723f41d773ba6946b0397961e08b81adf4a47a279dd5f445f56c2a16eef40bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721645264374702822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kube
rnetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a,PodSandboxId:0c2ec5e338fb367db8b1a2ad528c7af8ed202e4bbaeab5159468a25321378cae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721645264350452857,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e97d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb,PodSandboxId:e171bdcb5b84c953c7434405b97b99a193f09e165b7345f7c8825f108427dce6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721645252505387479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44,PodSandboxId:ffbce6c0af4bc0412cbe807c70c74257bbd3514d683fc4a1672124862f6298c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721645250607555032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240,PodSandboxId:54a1041d8e184916319d562d688313e1c1dd4452462b22bf88d27c23483b8d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721645230463280784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d20cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08,PodSandboxId:e5abe1a4431950fd7a30c4d790682bbf3541b7faca55396453e5041f929979a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721645230331861897,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e83e2c6-b5c3-44e3-99ec-a8a298c2a000 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.193169487Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24fea708-d33b-417a-b989-d789066507fc name=/runtime.v1.RuntimeService/Version
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.193316354Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24fea708-d33b-417a-b989-d789066507fc name=/runtime.v1.RuntimeService/Version
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.194638248Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c964ba2c-a0c7-438c-b2f6-ea11f98cab9d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.195392825Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721646034195366895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c964ba2c-a0c7-438c-b2f6-ea11f98cab9d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.195957507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4347d8bd-956f-4404-8e75-9dadbb2f7f32 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.196013595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4347d8bd-956f-4404-8e75-9dadbb2f7f32 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.196760480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d31f2a8013d2e24edc425273e44425f67e6b9bb2949a0bae5a2fc61a7180c0c,PodSandboxId:d7505be00a4dd0017426e901923c85556c7e1aab9ce2236d3bdf2f58ab23daad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721645982504681132,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dedb5e16ff7cefc4f7b2de3aab3f3666890577319fff80f0892bd25b07235ee4,PodSandboxId:122f37260fa272181bc977c5fab28d93568ba79ff386bcead824d3b18fdf5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721645944506619235,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d3862fae152922959e5745226d0d0346254a37fa19be0adbe29b48b98ca54f,PodSandboxId:1408cf32a9b111ad0d05c077ec792e7280e1c819bdab24d4ec83ae665e7eb31d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721645939505166534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Annotations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b666c66aefa1df7559d97096e7dbcc4b6708df300667bc67439e21aebef5507c,PodSandboxId:d7505be00a4dd0017426e901923c85556c7e1aab9ce2236d3bdf2f58ab23daad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721645937505645302,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97907e6d90193f3e298b74cb4c1ffbf5728c2a1c0b4e9f3b92965be2e2bd229,PodSandboxId:6b9fbbc4ff4d170d0fbaa8ea3ce27d5acaa45194f4dbdcb8c21011da489de5ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721645932944146967,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4fac42e604059d8b7bb12caf1f0c51694a1b661ab66338849703b3fbb4795e,PodSandboxId:539884c662756c5287d2b1fb6603b44f5fc001982dc4a7c3e612abec844858f1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721645911704398704,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 784e450de252cbe54f11c8aea749b974,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b8c278ca2273efc18e02c03ef51d705a9f50bc891b8d9a87cd8017ec61ffa6,PodSandboxId:e5bfddee43aaff99037f91d93444606703b272918e4137afe75feeabb3aa8498,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721645900810388135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:b354707c2b811ad6c903db093fc012a7903131693e84bd68bec08a423d37bfef,PodSandboxId:6f47141e7863442f1f1e1503d29ca7cf4025d4a49f20e97deadfde14147edca1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645899806127192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db94009c521f9822effbf7134357899e47ffe347dcd2c5651040fefa4cca33b2,PodSandboxId:640d3f2649cc233feb8cb448344c36e0aa252e2c06d532d2bf22b136b8c0b86f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721645899564553650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e24b057cc5e5b35b2d3967b6d4f607365955ef314fda4827d9bef6dbe115f61,PodSandboxId:e1b9e5c8554a61e601b07794864b22dcfadd7b2f1567561e191febd07d276299,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721645899554678964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394a8f4400ea3af4e39dfd78d7b3e2d8915e8e032aaab6f96a2cc89c41384bfd,PodSandboxId:e48aa2925bd6b9beb5f20038d85f152677c8328cbbfbe7c2cf8521228a46b709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645899461053883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e97d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18af36a6c7e03b1d474476ce8bb6a180b7f5593bf25ef5639e35ee96f7fd9b7b,PodSandboxId:122f37260fa272181bc977c5fab28d93568ba79ff386bcead824d3b18fdf5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721645899340665928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b03d6c4e851c5841d4c4a484805940e2d56e4bfef5f95ad9c5c11320f5ce2e2,PodSandboxId:767e5480a736f2cfadbd9af1f98ab5c4e9e00f4af6962bdcb8f0a372b607350a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721645899255967056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d2
0cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea5d5b8c8175c7f9b6fa74f3c735f5b5e950f3cae2432c981a515095aff7cbde,PodSandboxId:1408cf32a9b111ad0d05c077ec792e7280e1c819bdab24d4ec83ae665e7eb31d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721645899262523073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Ann
otations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e0d7d39c32b26c8cf125a279920668ca2e951905242a4868edd6a21fca1416c,PodSandboxId:816fd2e7cd706d322555f2f36fbd206ae986d8ed89be70bd1de6c5b649078cfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721645402571179847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719,PodSandboxId:4723f41d773ba6946b0397961e08b81adf4a47a279dd5f445f56c2a16eef40bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721645264374702822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kube
rnetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a,PodSandboxId:0c2ec5e338fb367db8b1a2ad528c7af8ed202e4bbaeab5159468a25321378cae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721645264350452857,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e97d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb,PodSandboxId:e171bdcb5b84c953c7434405b97b99a193f09e165b7345f7c8825f108427dce6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721645252505387479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44,PodSandboxId:ffbce6c0af4bc0412cbe807c70c74257bbd3514d683fc4a1672124862f6298c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721645250607555032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240,PodSandboxId:54a1041d8e184916319d562d688313e1c1dd4452462b22bf88d27c23483b8d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721645230463280784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d20cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08,PodSandboxId:e5abe1a4431950fd7a30c4d790682bbf3541b7faca55396453e5041f929979a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721645230331861897,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4347d8bd-956f-4404-8e75-9dadbb2f7f32 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.240589512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d7ff6020-ac47-4ffb-8fce-00343d346849 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.240680719Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d7ff6020-ac47-4ffb-8fce-00343d346849 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.241431042Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d16b2b9-7a9f-4923-8c9f-09a0fdc8c404 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.242186274Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721646034242156138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d16b2b9-7a9f-4923-8c9f-09a0fdc8c404 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.242826979Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=634e0673-078b-4e70-a2d5-c17dbddf4021 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.242881650Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=634e0673-078b-4e70-a2d5-c17dbddf4021 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:00:34 ha-461283 crio[3751]: time="2024-07-22 11:00:34.243513705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d31f2a8013d2e24edc425273e44425f67e6b9bb2949a0bae5a2fc61a7180c0c,PodSandboxId:d7505be00a4dd0017426e901923c85556c7e1aab9ce2236d3bdf2f58ab23daad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721645982504681132,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dedb5e16ff7cefc4f7b2de3aab3f3666890577319fff80f0892bd25b07235ee4,PodSandboxId:122f37260fa272181bc977c5fab28d93568ba79ff386bcead824d3b18fdf5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721645944506619235,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d3862fae152922959e5745226d0d0346254a37fa19be0adbe29b48b98ca54f,PodSandboxId:1408cf32a9b111ad0d05c077ec792e7280e1c819bdab24d4ec83ae665e7eb31d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721645939505166534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Annotations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b666c66aefa1df7559d97096e7dbcc4b6708df300667bc67439e21aebef5507c,PodSandboxId:d7505be00a4dd0017426e901923c85556c7e1aab9ce2236d3bdf2f58ab23daad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721645937505645302,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97907e6d90193f3e298b74cb4c1ffbf5728c2a1c0b4e9f3b92965be2e2bd229,PodSandboxId:6b9fbbc4ff4d170d0fbaa8ea3ce27d5acaa45194f4dbdcb8c21011da489de5ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721645932944146967,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4fac42e604059d8b7bb12caf1f0c51694a1b661ab66338849703b3fbb4795e,PodSandboxId:539884c662756c5287d2b1fb6603b44f5fc001982dc4a7c3e612abec844858f1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721645911704398704,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 784e450de252cbe54f11c8aea749b974,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b8c278ca2273efc18e02c03ef51d705a9f50bc891b8d9a87cd8017ec61ffa6,PodSandboxId:e5bfddee43aaff99037f91d93444606703b272918e4137afe75feeabb3aa8498,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721645900810388135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:b354707c2b811ad6c903db093fc012a7903131693e84bd68bec08a423d37bfef,PodSandboxId:6f47141e7863442f1f1e1503d29ca7cf4025d4a49f20e97deadfde14147edca1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645899806127192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db94009c521f9822effbf7134357899e47ffe347dcd2c5651040fefa4cca33b2,PodSandboxId:640d3f2649cc233feb8cb448344c36e0aa252e2c06d532d2bf22b136b8c0b86f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721645899564553650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e24b057cc5e5b35b2d3967b6d4f607365955ef314fda4827d9bef6dbe115f61,PodSandboxId:e1b9e5c8554a61e601b07794864b22dcfadd7b2f1567561e191febd07d276299,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721645899554678964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394a8f4400ea3af4e39dfd78d7b3e2d8915e8e032aaab6f96a2cc89c41384bfd,PodSandboxId:e48aa2925bd6b9beb5f20038d85f152677c8328cbbfbe7c2cf8521228a46b709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645899461053883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e97d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18af36a6c7e03b1d474476ce8bb6a180b7f5593bf25ef5639e35ee96f7fd9b7b,PodSandboxId:122f37260fa272181bc977c5fab28d93568ba79ff386bcead824d3b18fdf5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721645899340665928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b03d6c4e851c5841d4c4a484805940e2d56e4bfef5f95ad9c5c11320f5ce2e2,PodSandboxId:767e5480a736f2cfadbd9af1f98ab5c4e9e00f4af6962bdcb8f0a372b607350a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721645899255967056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d2
0cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea5d5b8c8175c7f9b6fa74f3c735f5b5e950f3cae2432c981a515095aff7cbde,PodSandboxId:1408cf32a9b111ad0d05c077ec792e7280e1c819bdab24d4ec83ae665e7eb31d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721645899262523073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Ann
otations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e0d7d39c32b26c8cf125a279920668ca2e951905242a4868edd6a21fca1416c,PodSandboxId:816fd2e7cd706d322555f2f36fbd206ae986d8ed89be70bd1de6c5b649078cfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721645402571179847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719,PodSandboxId:4723f41d773ba6946b0397961e08b81adf4a47a279dd5f445f56c2a16eef40bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721645264374702822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kube
rnetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a,PodSandboxId:0c2ec5e338fb367db8b1a2ad528c7af8ed202e4bbaeab5159468a25321378cae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721645264350452857,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e97d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb,PodSandboxId:e171bdcb5b84c953c7434405b97b99a193f09e165b7345f7c8825f108427dce6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721645252505387479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44,PodSandboxId:ffbce6c0af4bc0412cbe807c70c74257bbd3514d683fc4a1672124862f6298c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721645250607555032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240,PodSandboxId:54a1041d8e184916319d562d688313e1c1dd4452462b22bf88d27c23483b8d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721645230463280784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d20cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08,PodSandboxId:e5abe1a4431950fd7a30c4d790682bbf3541b7faca55396453e5041f929979a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721645230331861897,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=634e0673-078b-4e70-a2d5-c17dbddf4021 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8d31f2a8013d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      51 seconds ago       Running             storage-provisioner       4                   d7505be00a4dd       storage-provisioner
	dedb5e16ff7ce       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   122f37260fa27       kube-controller-manager-ha-461283
	a4d3862fae152       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   1408cf32a9b11       kube-apiserver-ha-461283
	b666c66aefa1d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   d7505be00a4dd       storage-provisioner
	d97907e6d9019       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   6b9fbbc4ff4d1       busybox-fc5497c4f-hkw9v
	7c4fac42e6040       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   539884c662756       kube-vip-ha-461283
	37b8c278ca227       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   e5bfddee43aaf       kube-proxy-28zxf
	b354707c2b811       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   6f47141e78634       coredns-7db6d8ff4d-zb547
	db94009c521f9       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   640d3f2649cc2       kindnet-hmrqh
	3e24b057cc5e5       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   e1b9e5c8554a6       etcd-ha-461283
	394a8f4400ea3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   e48aa2925bd6b       coredns-7db6d8ff4d-qrfdd
	18af36a6c7e03       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   122f37260fa27       kube-controller-manager-ha-461283
	ea5d5b8c8175c       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   1408cf32a9b11       kube-apiserver-ha-461283
	3b03d6c4e851c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   767e5480a736f       kube-scheduler-ha-461283
	4e0d7d39c32b2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   816fd2e7cd706       busybox-fc5497c4f-hkw9v
	5920882be1f91       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   4723f41d773ba       coredns-7db6d8ff4d-zb547
	797ae9e61fe18       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   0c2ec5e338fb3       coredns-7db6d8ff4d-qrfdd
	165b67d20aa98       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   e171bdcb5b84c       kindnet-hmrqh
	8ad5ed56ce259       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   ffbce6c0af4bc       kube-proxy-28zxf
	70a36c3082983       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago       Exited              kube-scheduler            0                   54a1041d8e184       kube-scheduler-ha-461283
	dc7da6bdaabcb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   e5abe1a443195       etcd-ha-461283
	
	
	==> coredns [394a8f4400ea3af4e39dfd78d7b3e2d8915e8e032aaab6f96a2cc89c41384bfd] <==
	Trace[495896122]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:58:34.693)
	Trace[495896122]: [10.001240226s] [10.001240226s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43554->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43550->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43550->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43554->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719] <==
	[INFO] 10.244.0.4:58821 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000212894s
	[INFO] 10.244.0.4:36629 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118072s
	[INFO] 10.244.0.4:39713 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00173787s
	[INFO] 10.244.2.2:34877 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249226s
	[INFO] 10.244.2.2:47321 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000169139s
	[INFO] 10.244.2.2:37812 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009086884s
	[INFO] 10.244.2.2:48940 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000477846s
	[INFO] 10.244.0.4:59919 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000067175s
	[INFO] 10.244.2.2:42645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116023s
	[INFO] 10.244.2.2:46340 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079971s
	[INFO] 10.244.1.2:40840 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133586s
	[INFO] 10.244.1.2:47315 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158975s
	[INFO] 10.244.1.2:41268 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093188s
	[INFO] 10.244.2.2:49311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014354s
	[INFO] 10.244.2.2:35152 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000214208s
	[INFO] 10.244.1.2:60324 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129417s
	[INFO] 10.244.1.2:58260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000228807s
	[INFO] 10.244.1.2:39894 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113717s
	[INFO] 10.244.0.4:56883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152128s
	[INFO] 10.244.0.4:39699 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074743s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1881&timeout=6m27s&timeoutSeconds=387&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1881&timeout=8m29s&timeoutSeconds=509&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1879&timeout=5m44s&timeoutSeconds=344&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a] <==
	[INFO] 10.244.1.2:50008 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060128s
	[INFO] 10.244.0.4:57021 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001828391s
	[INFO] 10.244.0.4:43357 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000054533s
	[INFO] 10.244.0.4:60216 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000029938s
	[INFO] 10.244.0.4:48124 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001149366s
	[INFO] 10.244.0.4:34363 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000035155s
	[INFO] 10.244.0.4:44217 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049654s
	[INFO] 10.244.0.4:35448 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000035288s
	[INFO] 10.244.2.2:42369 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105863s
	[INFO] 10.244.2.2:51781 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069936s
	[INFO] 10.244.1.2:47904 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103521s
	[INFO] 10.244.0.4:49081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120239s
	[INFO] 10.244.0.4:40762 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121632s
	[INFO] 10.244.0.4:59110 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066206s
	[INFO] 10.244.0.4:39650 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092772s
	[INFO] 10.244.2.2:51074 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000265828s
	[INFO] 10.244.2.2:58192 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000130056s
	[INFO] 10.244.1.2:54053 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000255068s
	[INFO] 10.244.0.4:50225 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000074972s
	[INFO] 10.244.0.4:44950 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000080101s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b354707c2b811ad6c903db093fc012a7903131693e84bd68bec08a423d37bfef] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48558->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48558->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:54442->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1297928674]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:58:31.367) (total time: 10319ms):
	Trace[1297928674]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:54442->10.96.0.1:443: read: connection reset by peer 10319ms (10:58:41.686)
	Trace[1297928674]: [10.31917915s] [10.31917915s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:54442->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-461283
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-461283
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-461283
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T10_47_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:47:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-461283
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:00:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:59:03 +0000   Mon, 22 Jul 2024 10:47:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:59:03 +0000   Mon, 22 Jul 2024 10:47:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:59:03 +0000   Mon, 22 Jul 2024 10:47:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:59:03 +0000   Mon, 22 Jul 2024 10:47:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-461283
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7adceecddbb41f7a81e4df2b7433c7b
	  System UUID:                f7adceec-ddbb-41f7-a81e-4df2b7433c7b
	  Boot ID:                    16bdd5e7-d27f-4ce8-a232-7bbe4c4337c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hkw9v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-qrfdd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-zb547             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-461283                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-hmrqh                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-461283             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-461283    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-28zxf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-461283             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-461283                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 13m    kube-proxy       
	  Normal   Starting                 92s    kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m    kubelet          Node ha-461283 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m    kubelet          Node ha-461283 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m    kubelet          Node ha-461283 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m    node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	  Normal   NodeReady                12m    kubelet          Node ha-461283 status is now: NodeReady
	  Normal   RegisteredNode           11m    node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	  Normal   RegisteredNode           10m    node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	  Warning  ContainerGCFailed        3m18s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           81s    node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	  Normal   RegisteredNode           78s    node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	  Normal   RegisteredNode           27s    node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	
	
	Name:               ha-461283-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-461283-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-461283
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T10_48_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:48:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-461283-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:00:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:59:44 +0000   Mon, 22 Jul 2024 10:59:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:59:44 +0000   Mon, 22 Jul 2024 10:59:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:59:44 +0000   Mon, 22 Jul 2024 10:59:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:59:44 +0000   Mon, 22 Jul 2024 10:59:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.207
	  Hostname:    ha-461283-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 164987e6e4bd4513b51bbf58f6e5b85b
	  System UUID:                164987e6-e4bd-4513-b51b-bf58f6e5b85b
	  Boot ID:                    11a321ea-198f-4688-be0d-666d749fed47
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cgtcl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-461283-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-qsphb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-461283-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-461283-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-xkbsx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-461283-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-461283-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 82s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-461283-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-461283-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-461283-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	  Normal  NodeNotReady             8m47s                node-controller  Node ha-461283-m02 status is now: NodeNotReady
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  111s (x8 over 111s)  kubelet          Node ha-461283-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s (x8 over 111s)  kubelet          Node ha-461283-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s (x7 over 111s)  kubelet          Node ha-461283-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           81s                  node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	  Normal  RegisteredNode           78s                  node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	  Normal  RegisteredNode           27s                  node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	
	
	Name:               ha-461283-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-461283-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-461283
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T10_49_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:49:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-461283-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:00:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 11:00:08 +0000   Mon, 22 Jul 2024 10:49:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 11:00:08 +0000   Mon, 22 Jul 2024 10:49:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 11:00:08 +0000   Mon, 22 Jul 2024 10:49:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 11:00:08 +0000   Mon, 22 Jul 2024 10:49:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-461283-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 daecc7f26d194772811b43378358ae92
	  System UUID:                daecc7f2-6d19-4772-811b-43378358ae92
	  Boot ID:                    4e6dfd01-e3ac-4382-87b2-0af82d6c778c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bf5vn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-461283-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-9m2ms                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-461283-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-461283-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-zdbjw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-461283-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-461283-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 36s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-461283-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-461283-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-461283-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-461283-m03 event: Registered Node ha-461283-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-461283-m03 event: Registered Node ha-461283-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-461283-m03 event: Registered Node ha-461283-m03 in Controller
	  Normal   RegisteredNode           81s                node-controller  Node ha-461283-m03 event: Registered Node ha-461283-m03 in Controller
	  Normal   RegisteredNode           78s                node-controller  Node ha-461283-m03 event: Registered Node ha-461283-m03 in Controller
	  Normal   Starting                 56s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  56s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  56s                kubelet          Node ha-461283-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s                kubelet          Node ha-461283-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s                kubelet          Node ha-461283-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 56s                kubelet          Node ha-461283-m03 has been rebooted, boot id: 4e6dfd01-e3ac-4382-87b2-0af82d6c778c
	  Normal   RegisteredNode           27s                node-controller  Node ha-461283-m03 event: Registered Node ha-461283-m03 in Controller
	
	
	Name:               ha-461283-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-461283-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-461283
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T10_50_37_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:50:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-461283-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:00:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 11:00:26 +0000   Mon, 22 Jul 2024 11:00:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 11:00:26 +0000   Mon, 22 Jul 2024 11:00:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 11:00:26 +0000   Mon, 22 Jul 2024 11:00:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 11:00:26 +0000   Mon, 22 Jul 2024 11:00:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-461283-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 02bf2f0ce1a340479f7577f27f1f3419
	  System UUID:                02bf2f0c-e1a3-4047-9f75-77f27f1f3419
	  Boot ID:                    ab1a4f0a-2ddd-4380-9855-5da6b113f11d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8h8rp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m58s
	  kube-system                 kube-proxy-q6mgq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   Starting                 9m53s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m58s (x2 over 9m58s)  kubelet          Node ha-461283-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m58s (x2 over 9m58s)  kubelet          Node ha-461283-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m58s (x2 over 9m58s)  kubelet          Node ha-461283-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m57s                  node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal   RegisteredNode           9m55s                  node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal   RegisteredNode           9m54s                  node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal   NodeReady                9m40s                  kubelet          Node ha-461283-m04 status is now: NodeReady
	  Normal   RegisteredNode           81s                    node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal   RegisteredNode           78s                    node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal   NodeNotReady             40s                    node-controller  Node ha-461283-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           27s                    node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal   Starting                 8s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x3 over 8s)        kubelet          Node ha-461283-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 8s)        kubelet          Node ha-461283-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 8s)        kubelet          Node ha-461283-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s (x2 over 8s)        kubelet          Node ha-461283-m04 has been rebooted, boot id: ab1a4f0a-2ddd-4380-9855-5da6b113f11d
	  Normal   NodeReady                8s (x2 over 8s)        kubelet          Node ha-461283-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.217704] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.054835] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059084] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.188930] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[Jul22 10:47] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.257396] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +4.205609] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +3.948218] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.066710] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.986663] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.075913] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.885402] kauditd_printk_skb: 18 callbacks suppressed
	[ +22.062510] kauditd_printk_skb: 38 callbacks suppressed
	[Jul22 10:48] kauditd_printk_skb: 26 callbacks suppressed
	[Jul22 10:58] systemd-fstab-generator[3670]: Ignoring "noauto" option for root device
	[  +0.160551] systemd-fstab-generator[3682]: Ignoring "noauto" option for root device
	[  +0.204290] systemd-fstab-generator[3696]: Ignoring "noauto" option for root device
	[  +0.142387] systemd-fstab-generator[3708]: Ignoring "noauto" option for root device
	[  +0.291989] systemd-fstab-generator[3736]: Ignoring "noauto" option for root device
	[  +0.742591] systemd-fstab-generator[3838]: Ignoring "noauto" option for root device
	[ +12.873585] kauditd_printk_skb: 217 callbacks suppressed
	[ +10.061797] kauditd_printk_skb: 1 callbacks suppressed
	[ +18.401778] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [3e24b057cc5e5b35b2d3967b6d4f607365955ef314fda4827d9bef6dbe115f61] <==
	{"level":"warn","ts":"2024-07-22T10:59:32.780651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:59:32.79043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:59:32.890451Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4537875a7ae50e01","from":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-22T10:59:34.791236Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.127:2380/version","remote-member-id":"8982c3555c8db6c3","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-22T10:59:34.791338Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8982c3555c8db6c3","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-22T10:59:35.910672Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8982c3555c8db6c3","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-22T10:59:35.910717Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8982c3555c8db6c3","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-22T10:59:38.79297Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.127:2380/version","remote-member-id":"8982c3555c8db6c3","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-22T10:59:38.793044Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8982c3555c8db6c3","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-22T10:59:40.911436Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8982c3555c8db6c3","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-22T10:59:40.911505Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8982c3555c8db6c3","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-22T10:59:42.794934Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.127:2380/version","remote-member-id":"8982c3555c8db6c3","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-22T10:59:42.795004Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8982c3555c8db6c3","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-22T10:59:45.911528Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8982c3555c8db6c3","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-22T10:59:45.911667Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8982c3555c8db6c3","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-22T10:59:46.796554Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.127:2380/version","remote-member-id":"8982c3555c8db6c3","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-22T10:59:46.796596Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8982c3555c8db6c3","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-22T10:59:48.697217Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:59:48.697326Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:59:48.69789Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:59:48.724663Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"4537875a7ae50e01","to":"8982c3555c8db6c3","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-22T10:59:48.724764Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:59:48.727033Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"4537875a7ae50e01","to":"8982c3555c8db6c3","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-22T10:59:48.727077Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:59:54.119953Z","caller":"traceutil/trace.go:171","msg":"trace[771011943] transaction","detail":"{read_only:false; response_revision:2325; number_of_response:1; }","duration":"152.86711ms","start":"2024-07-22T10:59:53.967066Z","end":"2024-07-22T10:59:54.119933Z","steps":["trace[771011943] 'process raft request'  (duration: 143.070627ms)"],"step_count":1}
	
	
	==> etcd [dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08] <==
	2024/07/22 10:56:45 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/22 10:56:45 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/22 10:56:45 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/22 10:56:45 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/22 10:56:45 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-22T10:56:45.262629Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.43:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T10:56:45.262733Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.43:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-22T10:56:45.264463Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"4537875a7ae50e01","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-22T10:56:45.26467Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"364ab60f9995de0"}
	{"level":"info","ts":"2024-07-22T10:56:45.264722Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"364ab60f9995de0"}
	{"level":"info","ts":"2024-07-22T10:56:45.26475Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"364ab60f9995de0"}
	{"level":"info","ts":"2024-07-22T10:56:45.264989Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0"}
	{"level":"info","ts":"2024-07-22T10:56:45.265172Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0"}
	{"level":"info","ts":"2024-07-22T10:56:45.265261Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0"}
	{"level":"info","ts":"2024-07-22T10:56:45.265274Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"364ab60f9995de0"}
	{"level":"info","ts":"2024-07-22T10:56:45.26528Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:56:45.265289Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:56:45.265334Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:56:45.265386Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:56:45.26542Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:56:45.265447Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:56:45.265472Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:56:45.26815Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.43:2380"}
	{"level":"info","ts":"2024-07-22T10:56:45.268338Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.43:2380"}
	{"level":"info","ts":"2024-07-22T10:56:45.268371Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-461283","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.43:2380"],"advertise-client-urls":["https://192.168.39.43:2379"]}
	
	
	==> kernel <==
	 11:00:34 up 13 min,  0 users,  load average: 0.23, 0.43, 0.30
	Linux ha-461283 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb] <==
	I0722 10:56:23.636761       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 10:56:23.636826       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 10:56:23.636972       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0722 10:56:23.636994       1 main.go:322] Node ha-461283-m03 has CIDR [10.244.2.0/24] 
	I0722 10:56:23.637066       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 10:56:23.637092       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	I0722 10:56:33.636862       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 10:56:33.636950       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 10:56:33.637131       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0722 10:56:33.637154       1 main.go:322] Node ha-461283-m03 has CIDR [10.244.2.0/24] 
	I0722 10:56:33.637235       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 10:56:33.637260       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	I0722 10:56:33.637325       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 10:56:33.637345       1 main.go:299] handling current node
	E0722 10:56:38.315412       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1881&timeout=7m42s&timeoutSeconds=462&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	W0722 10:56:41.387418       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1881": dial tcp 10.96.0.1:443: connect: no route to host
	E0722 10:56:41.387730       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1881": dial tcp 10.96.0.1:443: connect: no route to host
	I0722 10:56:43.637161       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 10:56:43.637303       1 main.go:299] handling current node
	I0722 10:56:43.637402       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 10:56:43.637428       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 10:56:43.637701       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0722 10:56:43.637744       1 main.go:322] Node ha-461283-m03 has CIDR [10.244.2.0/24] 
	I0722 10:56:43.637945       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 10:56:43.638028       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [db94009c521f9822effbf7134357899e47ffe347dcd2c5651040fefa4cca33b2] <==
	I0722 11:00:00.851869       1 main.go:322] Node ha-461283-m03 has CIDR [10.244.2.0/24] 
	I0722 11:00:10.851905       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 11:00:10.851959       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 11:00:10.852144       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0722 11:00:10.852172       1 main.go:322] Node ha-461283-m03 has CIDR [10.244.2.0/24] 
	I0722 11:00:10.852248       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 11:00:10.852268       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	I0722 11:00:10.852330       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 11:00:10.852360       1 main.go:299] handling current node
	I0722 11:00:20.851688       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 11:00:20.851948       1 main.go:299] handling current node
	I0722 11:00:20.852003       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 11:00:20.852035       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 11:00:20.852269       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0722 11:00:20.852322       1 main.go:322] Node ha-461283-m03 has CIDR [10.244.2.0/24] 
	I0722 11:00:20.852442       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 11:00:20.852485       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	I0722 11:00:30.853481       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 11:00:30.853675       1 main.go:299] handling current node
	I0722 11:00:30.853716       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 11:00:30.853747       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 11:00:30.854062       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0722 11:00:30.854132       1 main.go:322] Node ha-461283-m03 has CIDR [10.244.2.0/24] 
	I0722 11:00:30.854275       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 11:00:30.854315       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a4d3862fae152922959e5745226d0d0346254a37fa19be0adbe29b48b98ca54f] <==
	I0722 10:59:01.418683       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0722 10:59:01.422872       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0722 10:59:01.422902       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0722 10:59:01.510654       1 shared_informer.go:320] Caches are synced for configmaps
	I0722 10:59:01.512060       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0722 10:59:01.514156       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0722 10:59:01.517538       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0722 10:59:01.517636       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0722 10:59:01.517679       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0722 10:59:01.517700       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0722 10:59:01.523556       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0722 10:59:01.523730       1 aggregator.go:165] initial CRD sync complete...
	I0722 10:59:01.523823       1 autoregister_controller.go:141] Starting autoregister controller
	I0722 10:59:01.523850       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0722 10:59:01.523872       1 cache.go:39] Caches are synced for autoregister controller
	W0722 10:59:01.529723       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.127]
	I0722 10:59:01.538826       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0722 10:59:01.543336       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0722 10:59:01.543371       1 policy_source.go:224] refreshing policies
	I0722 10:59:01.616080       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0722 10:59:01.631506       1 controller.go:615] quota admission added evaluator for: endpoints
	I0722 10:59:01.642514       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0722 10:59:01.651293       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0722 10:59:02.418224       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0722 10:59:02.777264       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.127 192.168.39.207 192.168.39.43]
	
	
	==> kube-apiserver [ea5d5b8c8175c7f9b6fa74f3c735f5b5e950f3cae2432c981a515095aff7cbde] <==
	I0722 10:58:20.093333       1 options.go:221] external host was not specified, using 192.168.39.43
	I0722 10:58:20.097582       1 server.go:148] Version: v1.30.3
	I0722 10:58:20.097639       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:58:20.669994       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0722 10:58:20.673269       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0722 10:58:20.677737       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0722 10:58:20.677942       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0722 10:58:20.678175       1 instance.go:299] Using reconciler: lease
	W0722 10:58:40.667753       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0722 10:58:40.668015       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0722 10:58:40.679375       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [18af36a6c7e03b1d474476ce8bb6a180b7f5593bf25ef5639e35ee96f7fd9b7b] <==
	I0722 10:58:21.228475       1 serving.go:380] Generated self-signed cert in-memory
	I0722 10:58:21.720222       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0722 10:58:21.720261       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:58:21.721936       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0722 10:58:21.722040       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0722 10:58:21.722472       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0722 10:58:21.722544       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0722 10:58:41.725414       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.43:8443/healthz\": dial tcp 192.168.39.43:8443: connect: connection refused"
	
	
	==> kube-controller-manager [dedb5e16ff7cefc4f7b2de3aab3f3666890577319fff80f0892bd25b07235ee4] <==
	I0722 10:59:16.989072       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-461283"
	I0722 10:59:16.989145       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-461283-m02"
	I0722 10:59:16.989187       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-461283-m03"
	I0722 10:59:16.989232       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-461283-m04"
	I0722 10:59:16.989560       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0722 10:59:17.082954       1 shared_informer.go:320] Caches are synced for attach detach
	I0722 10:59:17.087138       1 shared_informer.go:320] Caches are synced for namespace
	I0722 10:59:17.108374       1 shared_informer.go:320] Caches are synced for service account
	I0722 10:59:17.146623       1 shared_informer.go:320] Caches are synced for disruption
	I0722 10:59:17.147856       1 shared_informer.go:320] Caches are synced for resource quota
	I0722 10:59:17.150096       1 shared_informer.go:320] Caches are synced for deployment
	I0722 10:59:17.151049       1 shared_informer.go:320] Caches are synced for resource quota
	I0722 10:59:17.571716       1 shared_informer.go:320] Caches are synced for garbage collector
	I0722 10:59:17.584917       1 shared_informer.go:320] Caches are synced for garbage collector
	I0722 10:59:17.585016       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0722 10:59:24.241959       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-j4v7z EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-j4v7z\": the object has been modified; please apply your changes to the latest version and try again"
	I0722 10:59:24.242389       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"97c7df87-7608-41d0-a097-42928a86d743", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-j4v7z EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-j4v7z": the object has been modified; please apply your changes to the latest version and try again
	I0722 10:59:24.266628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="116.109548ms"
	I0722 10:59:24.308122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.366927ms"
	I0722 10:59:24.308348       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.36µs"
	I0722 10:59:39.347569       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.908419ms"
	I0722 10:59:39.347716       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.768µs"
	I0722 10:59:56.438422       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.223431ms"
	I0722 10:59:56.438603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.437µs"
	I0722 11:00:26.459523       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-461283-m04"
	
	
	==> kube-proxy [37b8c278ca2273efc18e02c03ef51d705a9f50bc891b8d9a87cd8017ec61ffa6] <==
	E0722 10:58:43.884474       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-461283\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0722 10:59:02.317179       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-461283\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0722 10:59:02.317570       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0722 10:59:02.359732       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 10:59:02.359864       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 10:59:02.359925       1 server_linux.go:165] "Using iptables Proxier"
	I0722 10:59:02.362684       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 10:59:02.363082       1 server.go:872] "Version info" version="v1.30.3"
	I0722 10:59:02.363150       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:59:02.364760       1 config.go:192] "Starting service config controller"
	I0722 10:59:02.364929       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 10:59:02.365050       1 config.go:101] "Starting endpoint slice config controller"
	I0722 10:59:02.366307       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 10:59:02.365108       1 config.go:319] "Starting node config controller"
	I0722 10:59:02.366444       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0722 10:59:05.387183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:59:05.387944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:59:05.387507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:59:05.388036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:59:05.387757       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:59:05.388088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:59:05.387647       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0722 10:59:06.266897       1 shared_informer.go:320] Caches are synced for node config
	I0722 10:59:06.267237       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 10:59:06.465645       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44] <==
	E0722 10:55:22.987431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1813": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:55:22.987219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:55:22.987481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:55:29.707132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1813": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:55:29.707247       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1813": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:55:29.707331       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:55:29.707377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:55:29.707316       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:55:29.707403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:55:39.564382       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:55:39.564725       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:55:39.565208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1813": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:55:39.565331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1813": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:55:42.638144       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:55:42.638400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:56:01.068221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:56:01.069296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:56:01.069042       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1813": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:56:01.069926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1813": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:56:04.140412       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:56:04.140558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:56:28.716026       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:56:28.716243       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:56:41.003438       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:56:41.003583       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [3b03d6c4e851c5841d4c4a484805940e2d56e4bfef5f95ad9c5c11320f5ce2e2] <==
	W0722 10:58:56.928139       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.43:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	E0722 10:58:56.928258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.43:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	W0722 10:58:57.333349       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.43:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	E0722 10:58:57.333492       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.43:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	W0722 10:58:57.412106       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.43:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	E0722 10:58:57.412167       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.43:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	W0722 10:58:57.567495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.43:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	E0722 10:58:57.567547       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.43:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	W0722 10:58:57.621313       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.43:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	E0722 10:58:57.621409       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.43:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	W0722 10:58:57.728034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.43:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	E0722 10:58:57.728073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.43:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	W0722 10:58:57.889275       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.43:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	E0722 10:58:57.889432       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.43:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	W0722 10:58:57.914501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.43:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	E0722 10:58:57.914555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.43:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	W0722 10:58:58.479352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.43:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	E0722 10:58:58.479469       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.43:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	W0722 10:59:01.445252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 10:59:01.445423       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 10:59:01.445734       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0722 10:59:01.445856       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0722 10:59:01.446078       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 10:59:01.446172       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0722 10:59:18.596530       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240] <==
	W0722 10:56:40.451431       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 10:56:40.451516       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 10:56:40.862330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 10:56:40.862422       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 10:56:40.898484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:40.898572       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:41.160342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 10:56:41.160391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 10:56:41.299295       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:41.299386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:43.621130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 10:56:43.621201       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 10:56:43.774446       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:43.774546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:44.371291       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:44.371377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:44.709710       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:44.709743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:44.713666       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 10:56:44.713691       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 10:56:44.817289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:44.817333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:44.876646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 10:56:44.876701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 10:56:45.109125       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 22 10:58:59 ha-461283 kubelet[1372]: I0722 10:58:59.493397    1372 scope.go:117] "RemoveContainer" containerID="ea5d5b8c8175c7f9b6fa74f3c735f5b5e950f3cae2432c981a515095aff7cbde"
	Jul 22 10:59:00 ha-461283 kubelet[1372]: I0722 10:59:00.998975    1372 scope.go:117] "RemoveContainer" containerID="92b7c4925ea54166cecf79b34ee0d3d4ddec32225996763b249fe2423bde1f03"
	Jul 22 10:59:00 ha-461283 kubelet[1372]: I0722 10:59:00.999621    1372 scope.go:117] "RemoveContainer" containerID="b666c66aefa1df7559d97096e7dbcc4b6708df300667bc67439e21aebef5507c"
	Jul 22 10:59:00 ha-461283 kubelet[1372]: E0722 10:59:00.999896    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a336a57b-330a-4251-8e33-2b277593a565)\"" pod="kube-system/storage-provisioner" podUID="a336a57b-330a-4251-8e33-2b277593a565"
	Jul 22 10:59:02 ha-461283 kubelet[1372]: I0722 10:59:02.315304    1372 status_manager.go:853] "Failed to get status for pod" podUID="2e6aa709297f0b149dac625c6b57cb57" pod="kube-system/kube-controller-manager-ha-461283" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-461283\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 22 10:59:04 ha-461283 kubelet[1372]: I0722 10:59:04.492757    1372 scope.go:117] "RemoveContainer" containerID="18af36a6c7e03b1d474476ce8bb6a180b7f5593bf25ef5639e35ee96f7fd9b7b"
	Jul 22 10:59:13 ha-461283 kubelet[1372]: I0722 10:59:13.492551    1372 scope.go:117] "RemoveContainer" containerID="b666c66aefa1df7559d97096e7dbcc4b6708df300667bc67439e21aebef5507c"
	Jul 22 10:59:13 ha-461283 kubelet[1372]: E0722 10:59:13.493477    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a336a57b-330a-4251-8e33-2b277593a565)\"" pod="kube-system/storage-provisioner" podUID="a336a57b-330a-4251-8e33-2b277593a565"
	Jul 22 10:59:16 ha-461283 kubelet[1372]: E0722 10:59:16.537215    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:59:16 ha-461283 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:59:16 ha-461283 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:59:16 ha-461283 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:59:16 ha-461283 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:59:16 ha-461283 kubelet[1372]: I0722 10:59:16.653147    1372 scope.go:117] "RemoveContainer" containerID="55b27c32c654e8450ab3013a13dfb71de85f5bd30812faee5de5482a651d8eea"
	Jul 22 10:59:28 ha-461283 kubelet[1372]: I0722 10:59:28.493734    1372 scope.go:117] "RemoveContainer" containerID="b666c66aefa1df7559d97096e7dbcc4b6708df300667bc67439e21aebef5507c"
	Jul 22 10:59:28 ha-461283 kubelet[1372]: E0722 10:59:28.497744    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a336a57b-330a-4251-8e33-2b277593a565)\"" pod="kube-system/storage-provisioner" podUID="a336a57b-330a-4251-8e33-2b277593a565"
	Jul 22 10:59:35 ha-461283 kubelet[1372]: I0722 10:59:35.492465    1372 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-461283" podUID="244dde01-94fe-46c1-82f2-92ca2624750e"
	Jul 22 10:59:35 ha-461283 kubelet[1372]: I0722 10:59:35.513008    1372 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-461283"
	Jul 22 10:59:42 ha-461283 kubelet[1372]: I0722 10:59:42.493601    1372 scope.go:117] "RemoveContainer" containerID="b666c66aefa1df7559d97096e7dbcc4b6708df300667bc67439e21aebef5507c"
	Jul 22 10:59:43 ha-461283 kubelet[1372]: I0722 10:59:43.854836    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-hkw9v" podStartSLOduration=583.987355246 podStartE2EDuration="9m44.854758669s" podCreationTimestamp="2024-07-22 10:49:59 +0000 UTC" firstStartedPulling="2024-07-22 10:50:01.688256121 +0000 UTC m=+165.331494388" lastFinishedPulling="2024-07-22 10:50:02.555659544 +0000 UTC m=+166.198897811" observedRunningTime="2024-07-22 10:50:03.163075072 +0000 UTC m=+166.806313358" watchObservedRunningTime="2024-07-22 10:59:43.854758669 +0000 UTC m=+747.497996947"
	Jul 22 11:00:16 ha-461283 kubelet[1372]: E0722 11:00:16.528957    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 11:00:16 ha-461283 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 11:00:16 ha-461283 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 11:00:16 ha-461283 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 11:00:16 ha-461283 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:00:33.810654   31893 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19313-5960/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
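The logs step above could not replay lastStart.txt because a single line in that file exceeds bufio.Scanner's default 64 KiB token limit, which is what produces "bufio.Scanner: token too long". As a minimal illustration only (not minikube's actual logs.go code), a Go scanner can be given a larger buffer so oversized lines are read instead of aborting the scan; the file name is just the one cited in the error:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path for illustration; the report points at
	// .minikube/logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default MaxScanTokenSize is 64 KiB; very long start-log lines
	// (such as the serialized cluster config later in this report) exceed
	// it and yield "token too long". Grow the buffer to 1 MiB.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}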
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-461283 -n ha-461283
helpers_test.go:261: (dbg) Run:  kubectl --context ha-461283 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (353.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 stop -v=7 --alsologtostderr
E0722 11:01:36.611433   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-461283 stop -v=7 --alsologtostderr: exit status 82 (2m0.453731696s)

                                                
                                                
-- stdout --
	* Stopping node "ha-461283-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 11:00:53.439742   32305 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:00:53.439881   32305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:00:53.439899   32305 out.go:304] Setting ErrFile to fd 2...
	I0722 11:00:53.439909   32305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:00:53.440139   32305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:00:53.440373   32305 out.go:298] Setting JSON to false
	I0722 11:00:53.440490   32305 mustload.go:65] Loading cluster: ha-461283
	I0722 11:00:53.440837   32305 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:00:53.440938   32305 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 11:00:53.441169   32305 mustload.go:65] Loading cluster: ha-461283
	I0722 11:00:53.441310   32305 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:00:53.441338   32305 stop.go:39] StopHost: ha-461283-m04
	I0722 11:00:53.441703   32305 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:00:53.441747   32305 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:00:53.456322   32305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35683
	I0722 11:00:53.456758   32305 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:00:53.457320   32305 main.go:141] libmachine: Using API Version  1
	I0722 11:00:53.457354   32305 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:00:53.457687   32305 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:00:53.459979   32305 out.go:177] * Stopping node "ha-461283-m04"  ...
	I0722 11:00:53.461608   32305 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0722 11:00:53.461644   32305 main.go:141] libmachine: (ha-461283-m04) Calling .DriverName
	I0722 11:00:53.461849   32305 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0722 11:00:53.461891   32305 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHHostname
	I0722 11:00:53.464547   32305 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 11:00:53.464930   32305 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 12:00:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 11:00:53.464959   32305 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 11:00:53.465097   32305 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHPort
	I0722 11:00:53.465277   32305 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHKeyPath
	I0722 11:00:53.465428   32305 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHUsername
	I0722 11:00:53.465613   32305 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m04/id_rsa Username:docker}
	I0722 11:00:53.551079   32305 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0722 11:00:53.604606   32305 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0722 11:00:53.657402   32305 main.go:141] libmachine: Stopping "ha-461283-m04"...
	I0722 11:00:53.657441   32305 main.go:141] libmachine: (ha-461283-m04) Calling .GetState
	I0722 11:00:53.659098   32305 main.go:141] libmachine: (ha-461283-m04) Calling .Stop
	I0722 11:00:53.662506   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 0/120
	I0722 11:00:54.664473   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 1/120
	I0722 11:00:55.665845   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 2/120
	I0722 11:00:56.667183   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 3/120
	I0722 11:00:57.668478   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 4/120
	I0722 11:00:58.670378   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 5/120
	I0722 11:00:59.671577   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 6/120
	I0722 11:01:00.673417   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 7/120
	I0722 11:01:01.674576   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 8/120
	I0722 11:01:02.675991   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 9/120
	I0722 11:01:03.678058   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 10/120
	I0722 11:01:04.679367   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 11/120
	I0722 11:01:05.680710   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 12/120
	I0722 11:01:06.682853   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 13/120
	I0722 11:01:07.684279   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 14/120
	I0722 11:01:08.685965   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 15/120
	I0722 11:01:09.687521   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 16/120
	I0722 11:01:10.689805   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 17/120
	I0722 11:01:11.691046   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 18/120
	I0722 11:01:12.692145   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 19/120
	I0722 11:01:13.694055   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 20/120
	I0722 11:01:14.695226   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 21/120
	I0722 11:01:15.696668   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 22/120
	I0722 11:01:16.698996   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 23/120
	I0722 11:01:17.700336   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 24/120
	I0722 11:01:18.701740   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 25/120
	I0722 11:01:19.703024   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 26/120
	I0722 11:01:20.704410   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 27/120
	I0722 11:01:21.706198   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 28/120
	I0722 11:01:22.707599   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 29/120
	I0722 11:01:23.709459   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 30/120
	I0722 11:01:24.710887   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 31/120
	I0722 11:01:25.712162   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 32/120
	I0722 11:01:26.713816   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 33/120
	I0722 11:01:27.714997   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 34/120
	I0722 11:01:28.717045   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 35/120
	I0722 11:01:29.718962   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 36/120
	I0722 11:01:30.721165   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 37/120
	I0722 11:01:31.723126   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 38/120
	I0722 11:01:32.724764   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 39/120
	I0722 11:01:33.726967   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 40/120
	I0722 11:01:34.728481   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 41/120
	I0722 11:01:35.729951   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 42/120
	I0722 11:01:36.731950   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 43/120
	I0722 11:01:37.733198   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 44/120
	I0722 11:01:38.735030   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 45/120
	I0722 11:01:39.736501   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 46/120
	I0722 11:01:40.737721   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 47/120
	I0722 11:01:41.739225   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 48/120
	I0722 11:01:42.740678   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 49/120
	I0722 11:01:43.742772   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 50/120
	I0722 11:01:44.744326   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 51/120
	I0722 11:01:45.745620   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 52/120
	I0722 11:01:46.746939   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 53/120
	I0722 11:01:47.748219   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 54/120
	I0722 11:01:48.749949   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 55/120
	I0722 11:01:49.751070   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 56/120
	I0722 11:01:50.752354   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 57/120
	I0722 11:01:51.754326   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 58/120
	I0722 11:01:52.755728   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 59/120
	I0722 11:01:53.757151   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 60/120
	I0722 11:01:54.758597   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 61/120
	I0722 11:01:55.759744   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 62/120
	I0722 11:01:56.761550   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 63/120
	I0722 11:01:57.762636   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 64/120
	I0722 11:01:58.764794   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 65/120
	I0722 11:01:59.765936   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 66/120
	I0722 11:02:00.767857   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 67/120
	I0722 11:02:01.769346   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 68/120
	I0722 11:02:02.770436   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 69/120
	I0722 11:02:03.772239   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 70/120
	I0722 11:02:04.773428   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 71/120
	I0722 11:02:05.774608   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 72/120
	I0722 11:02:06.775959   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 73/120
	I0722 11:02:07.777416   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 74/120
	I0722 11:02:08.779079   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 75/120
	I0722 11:02:09.780321   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 76/120
	I0722 11:02:10.782011   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 77/120
	I0722 11:02:11.783163   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 78/120
	I0722 11:02:12.784462   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 79/120
	I0722 11:02:13.786361   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 80/120
	I0722 11:02:14.787687   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 81/120
	I0722 11:02:15.788861   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 82/120
	I0722 11:02:16.790748   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 83/120
	I0722 11:02:17.791959   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 84/120
	I0722 11:02:18.793628   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 85/120
	I0722 11:02:19.795220   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 86/120
	I0722 11:02:20.796544   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 87/120
	I0722 11:02:21.797961   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 88/120
	I0722 11:02:22.799252   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 89/120
	I0722 11:02:23.801171   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 90/120
	I0722 11:02:24.802979   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 91/120
	I0722 11:02:25.804358   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 92/120
	I0722 11:02:26.805566   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 93/120
	I0722 11:02:27.806806   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 94/120
	I0722 11:02:28.808692   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 95/120
	I0722 11:02:29.809963   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 96/120
	I0722 11:02:30.811292   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 97/120
	I0722 11:02:31.813005   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 98/120
	I0722 11:02:32.814871   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 99/120
	I0722 11:02:33.816949   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 100/120
	I0722 11:02:34.818894   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 101/120
	I0722 11:02:35.820087   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 102/120
	I0722 11:02:36.821916   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 103/120
	I0722 11:02:37.823216   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 104/120
	I0722 11:02:38.824988   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 105/120
	I0722 11:02:39.826195   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 106/120
	I0722 11:02:40.827287   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 107/120
	I0722 11:02:41.828419   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 108/120
	I0722 11:02:42.829956   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 109/120
	I0722 11:02:43.831992   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 110/120
	I0722 11:02:44.833264   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 111/120
	I0722 11:02:45.834726   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 112/120
	I0722 11:02:46.835915   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 113/120
	I0722 11:02:47.837279   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 114/120
	I0722 11:02:48.838790   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 115/120
	I0722 11:02:49.840056   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 116/120
	I0722 11:02:50.841213   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 117/120
	I0722 11:02:51.842788   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 118/120
	I0722 11:02:52.844265   32305 main.go:141] libmachine: (ha-461283-m04) Waiting for machine to stop 119/120
	I0722 11:02:53.844694   32305 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0722 11:02:53.844757   32305 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0722 11:02:53.846571   32305 out.go:177] 
	W0722 11:02:53.847795   32305 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0722 11:02:53.847811   32305 out.go:239] * 
	* 
	W0722 11:02:53.850571   32305 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 11:02:53.851700   32305 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-461283 stop -v=7 --alsologtostderr": exit status 82
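The stop fails with GUEST_STOP_TIMEOUT (exit status 82) because the driver polled the VM once per second for 120 attempts and it never left the "Running" state. A minimal sketch of that poll-until-stopped pattern, using hypothetical stopVM/vmState stand-ins rather than the real kvm2 driver calls, is:

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState and stopVM are hypothetical stand-ins for the driver's
// GetState/Stop calls; they only exist to illustrate the loop seen above.
func vmState(name string) string { return "Running" }
func stopVM(name string) error   { return nil }

func stopWithTimeout(name string, attempts int) error {
	if err := stopVM(name); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if vmState(name) != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	// Mirrors the failure above: after the final poll the VM is still "Running".
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := stopWithTimeout("ha-461283-m04", 120); err != nil {
		fmt.Println("stop err:", err)
	}
}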
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr: exit status 3 (19.03215163s)

                                                
                                                
-- stdout --
	ha-461283
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-461283-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 11:02:53.894889   32755 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:02:53.895004   32755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:02:53.895014   32755 out.go:304] Setting ErrFile to fd 2...
	I0722 11:02:53.895020   32755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:02:53.895262   32755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:02:53.895463   32755 out.go:298] Setting JSON to false
	I0722 11:02:53.895502   32755 mustload.go:65] Loading cluster: ha-461283
	I0722 11:02:53.895613   32755 notify.go:220] Checking for updates...
	I0722 11:02:53.895948   32755 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:02:53.895964   32755 status.go:255] checking status of ha-461283 ...
	I0722 11:02:53.896425   32755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:02:53.896470   32755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:02:53.915335   32755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34267
	I0722 11:02:53.915700   32755 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:02:53.916342   32755 main.go:141] libmachine: Using API Version  1
	I0722 11:02:53.916397   32755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:02:53.916701   32755 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:02:53.916856   32755 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 11:02:53.918102   32755 status.go:330] ha-461283 host status = "Running" (err=<nil>)
	I0722 11:02:53.918118   32755 host.go:66] Checking if "ha-461283" exists ...
	I0722 11:02:53.918392   32755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:02:53.918436   32755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:02:53.932553   32755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
	I0722 11:02:53.932947   32755 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:02:53.933357   32755 main.go:141] libmachine: Using API Version  1
	I0722 11:02:53.933378   32755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:02:53.933719   32755 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:02:53.933914   32755 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 11:02:53.936771   32755 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 11:02:53.937231   32755 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 11:02:53.937258   32755 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 11:02:53.937440   32755 host.go:66] Checking if "ha-461283" exists ...
	I0722 11:02:53.937746   32755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:02:53.937778   32755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:02:53.952204   32755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43855
	I0722 11:02:53.952578   32755 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:02:53.952988   32755 main.go:141] libmachine: Using API Version  1
	I0722 11:02:53.953008   32755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:02:53.953336   32755 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:02:53.953508   32755 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 11:02:53.953697   32755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 11:02:53.953718   32755 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 11:02:53.956139   32755 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 11:02:53.956521   32755 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 11:02:53.956546   32755 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 11:02:53.956724   32755 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 11:02:53.956891   32755 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 11:02:53.957001   32755 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 11:02:53.957133   32755 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 11:02:54.036827   32755 ssh_runner.go:195] Run: systemctl --version
	I0722 11:02:54.043266   32755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:02:54.060280   32755 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 11:02:54.060305   32755 api_server.go:166] Checking apiserver status ...
	I0722 11:02:54.060339   32755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:02:54.074885   32755 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4971/cgroup
	W0722 11:02:54.084337   32755 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4971/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:02:54.084371   32755 ssh_runner.go:195] Run: ls
	I0722 11:02:54.090203   32755 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 11:02:54.094410   32755 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 11:02:54.094436   32755 status.go:422] ha-461283 apiserver status = Running (err=<nil>)
	I0722 11:02:54.094450   32755 status.go:257] ha-461283 status: &{Name:ha-461283 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 11:02:54.094473   32755 status.go:255] checking status of ha-461283-m02 ...
	I0722 11:02:54.094879   32755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:02:54.094921   32755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:02:54.109354   32755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34779
	I0722 11:02:54.109706   32755 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:02:54.110131   32755 main.go:141] libmachine: Using API Version  1
	I0722 11:02:54.110150   32755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:02:54.110430   32755 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:02:54.110588   32755 main.go:141] libmachine: (ha-461283-m02) Calling .GetState
	I0722 11:02:54.112037   32755 status.go:330] ha-461283-m02 host status = "Running" (err=<nil>)
	I0722 11:02:54.112052   32755 host.go:66] Checking if "ha-461283-m02" exists ...
	I0722 11:02:54.112408   32755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:02:54.112451   32755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:02:54.126304   32755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34905
	I0722 11:02:54.126725   32755 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:02:54.127150   32755 main.go:141] libmachine: Using API Version  1
	I0722 11:02:54.127169   32755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:02:54.127473   32755 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:02:54.127646   32755 main.go:141] libmachine: (ha-461283-m02) Calling .GetIP
	I0722 11:02:54.130169   32755 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 11:02:54.130573   32755 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:58:30 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 11:02:54.130600   32755 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 11:02:54.130726   32755 host.go:66] Checking if "ha-461283-m02" exists ...
	I0722 11:02:54.130997   32755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:02:54.131025   32755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:02:54.145811   32755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44739
	I0722 11:02:54.146160   32755 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:02:54.146587   32755 main.go:141] libmachine: Using API Version  1
	I0722 11:02:54.146604   32755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:02:54.146869   32755 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:02:54.147065   32755 main.go:141] libmachine: (ha-461283-m02) Calling .DriverName
	I0722 11:02:54.147252   32755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 11:02:54.147272   32755 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHHostname
	I0722 11:02:54.149863   32755 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 11:02:54.150250   32755 main.go:141] libmachine: (ha-461283-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:59:21", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:58:30 +0000 UTC Type:0 Mac:52:54:00:a7:59:21 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-461283-m02 Clientid:01:52:54:00:a7:59:21}
	I0722 11:02:54.150269   32755 main.go:141] libmachine: (ha-461283-m02) DBG | domain ha-461283-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:59:21 in network mk-ha-461283
	I0722 11:02:54.150437   32755 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHPort
	I0722 11:02:54.150586   32755 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHKeyPath
	I0722 11:02:54.150728   32755 main.go:141] libmachine: (ha-461283-m02) Calling .GetSSHUsername
	I0722 11:02:54.150847   32755 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m02/id_rsa Username:docker}
	I0722 11:02:54.237438   32755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:02:54.253984   32755 kubeconfig.go:125] found "ha-461283" server: "https://192.168.39.254:8443"
	I0722 11:02:54.254015   32755 api_server.go:166] Checking apiserver status ...
	I0722 11:02:54.254049   32755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:02:54.270974   32755 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup
	W0722 11:02:54.280834   32755 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:02:54.280910   32755 ssh_runner.go:195] Run: ls
	I0722 11:02:54.285414   32755 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0722 11:02:54.290014   32755 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0722 11:02:54.290032   32755 status.go:422] ha-461283-m02 apiserver status = Running (err=<nil>)
	I0722 11:02:54.290039   32755 status.go:257] ha-461283-m02 status: &{Name:ha-461283-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 11:02:54.290052   32755 status.go:255] checking status of ha-461283-m04 ...
	I0722 11:02:54.290313   32755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:02:54.290341   32755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:02:54.304810   32755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38937
	I0722 11:02:54.305255   32755 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:02:54.305694   32755 main.go:141] libmachine: Using API Version  1
	I0722 11:02:54.305713   32755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:02:54.305988   32755 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:02:54.306166   32755 main.go:141] libmachine: (ha-461283-m04) Calling .GetState
	I0722 11:02:54.307652   32755 status.go:330] ha-461283-m04 host status = "Running" (err=<nil>)
	I0722 11:02:54.307668   32755 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 11:02:54.307979   32755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:02:54.308020   32755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:02:54.322601   32755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42021
	I0722 11:02:54.322936   32755 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:02:54.323307   32755 main.go:141] libmachine: Using API Version  1
	I0722 11:02:54.323328   32755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:02:54.323587   32755 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:02:54.323784   32755 main.go:141] libmachine: (ha-461283-m04) Calling .GetIP
	I0722 11:02:54.326302   32755 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 11:02:54.326727   32755 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 12:00:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 11:02:54.326742   32755 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 11:02:54.326889   32755 host.go:66] Checking if "ha-461283-m04" exists ...
	I0722 11:02:54.327199   32755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:02:54.327232   32755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:02:54.341922   32755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43563
	I0722 11:02:54.342243   32755 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:02:54.342679   32755 main.go:141] libmachine: Using API Version  1
	I0722 11:02:54.342701   32755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:02:54.343011   32755 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:02:54.343176   32755 main.go:141] libmachine: (ha-461283-m04) Calling .DriverName
	I0722 11:02:54.343357   32755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 11:02:54.343381   32755 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHHostname
	I0722 11:02:54.346109   32755 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 11:02:54.346587   32755 main.go:141] libmachine: (ha-461283-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1f:b9", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 12:00:21 +0000 UTC Type:0 Mac:52:54:00:e8:1f:b9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-461283-m04 Clientid:01:52:54:00:e8:1f:b9}
	I0722 11:02:54.346614   32755 main.go:141] libmachine: (ha-461283-m04) DBG | domain ha-461283-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:e8:1f:b9 in network mk-ha-461283
	I0722 11:02:54.346771   32755 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHPort
	I0722 11:02:54.346936   32755 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHKeyPath
	I0722 11:02:54.347078   32755 main.go:141] libmachine: (ha-461283-m04) Calling .GetSSHUsername
	I0722 11:02:54.347200   32755 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283-m04/id_rsa Username:docker}
	W0722 11:03:12.884655   32755 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.250:22: connect: no route to host
	W0722 11:03:12.884762   32755 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.250:22: connect: no route to host
	E0722 11:03:12.884785   32755 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.250:22: connect: no route to host
	I0722 11:03:12.884794   32755 status.go:257] ha-461283-m04 status: &{Name:ha-461283-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0722 11:03:12.884817   32755 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.250:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr" : exit status 3
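status exits with status 3 because every SSH dial to the worker (192.168.39.250:22) returns "connect: no route to host", so the node is reported as Host:Error / Kubelet:Nonexistent. A minimal, self-contained probe of the same condition, independent of minikube's sshutil package, is just a TCP dial with a timeout:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the log above; the probe itself is only illustrative.
	addr := "192.168.39.250:22"
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		// With the guest network gone this prints a "no route to host"
		// (or timeout) error, which status surfaces as Host:Error.
		fmt.Println("ssh port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable")
}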
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-461283 -n ha-461283
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-461283 logs -n 25: (1.702671795s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-461283 ssh -n ha-461283-m02 sudo cat                                          | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m03_ha-461283-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m03:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04:/home/docker/cp-test_ha-461283-m03_ha-461283-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283-m04 sudo cat                                          | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m03_ha-461283-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-461283 cp testdata/cp-test.txt                                                | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3161647133/001/cp-test_ha-461283-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283:/home/docker/cp-test_ha-461283-m04_ha-461283.txt                       |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283 sudo cat                                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m04_ha-461283.txt                                 |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m02:/home/docker/cp-test_ha-461283-m04_ha-461283-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283-m02 sudo cat                                          | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m04_ha-461283-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m03:/home/docker/cp-test_ha-461283-m04_ha-461283-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n                                                                 | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | ha-461283-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-461283 ssh -n ha-461283-m03 sudo cat                                          | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC | 22 Jul 24 10:51 UTC |
	|         | /home/docker/cp-test_ha-461283-m04_ha-461283-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-461283 node stop m02 -v=7                                                     | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-461283 node start m02 -v=7                                                    | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-461283 -v=7                                                           | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-461283 -v=7                                                                | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-461283 --wait=true -v=7                                                    | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 10:56 UTC | 22 Jul 24 11:00 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-461283                                                                | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 11:00 UTC |                     |
	| node    | ha-461283 node delete m03 -v=7                                                   | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 11:00 UTC | 22 Jul 24 11:00 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-461283 stop -v=7                                                              | ha-461283 | jenkins | v1.33.1 | 22 Jul 24 11:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 10:56:44
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 10:56:44.293597   30556 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:56:44.293698   30556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:56:44.293708   30556 out.go:304] Setting ErrFile to fd 2...
	I0722 10:56:44.293713   30556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:56:44.293921   30556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:56:44.294459   30556 out.go:298] Setting JSON to false
	I0722 10:56:44.295347   30556 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2356,"bootTime":1721643448,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 10:56:44.295400   30556 start.go:139] virtualization: kvm guest
	I0722 10:56:44.297333   30556 out.go:177] * [ha-461283] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 10:56:44.298738   30556 notify.go:220] Checking for updates...
	I0722 10:56:44.298750   30556 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 10:56:44.300061   30556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 10:56:44.301242   30556 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:56:44.302366   30556 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:56:44.303486   30556 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 10:56:44.304572   30556 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 10:56:44.305965   30556 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:56:44.306077   30556 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 10:56:44.306506   30556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:56:44.306558   30556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:56:44.321850   30556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0722 10:56:44.322216   30556 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:56:44.322722   30556 main.go:141] libmachine: Using API Version  1
	I0722 10:56:44.322743   30556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:56:44.323062   30556 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:56:44.323228   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:56:44.358001   30556 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 10:56:44.359102   30556 start.go:297] selected driver: kvm2
	I0722 10:56:44.359128   30556 start.go:901] validating driver "kvm2" against &{Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.250 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:56:44.359270   30556 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 10:56:44.359627   30556 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:56:44.359717   30556 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 10:56:44.374190   30556 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 10:56:44.374944   30556 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 10:56:44.374977   30556 cni.go:84] Creating CNI manager for ""
	I0722 10:56:44.374985   30556 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0722 10:56:44.375052   30556 start.go:340] cluster config:
	{Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.250 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:56:44.375209   30556 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:56:44.376791   30556 out.go:177] * Starting "ha-461283" primary control-plane node in "ha-461283" cluster
	I0722 10:56:44.378095   30556 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:56:44.378126   30556 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 10:56:44.378136   30556 cache.go:56] Caching tarball of preloaded images
	I0722 10:56:44.378217   30556 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 10:56:44.378230   30556 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 10:56:44.378350   30556 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/config.json ...
	I0722 10:56:44.378520   30556 start.go:360] acquireMachinesLock for ha-461283: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 10:56:44.378554   30556 start.go:364] duration metric: took 18.755µs to acquireMachinesLock for "ha-461283"
	I0722 10:56:44.378567   30556 start.go:96] Skipping create...Using existing machine configuration
	I0722 10:56:44.378574   30556 fix.go:54] fixHost starting: 
	I0722 10:56:44.378858   30556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:56:44.378904   30556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:56:44.392719   30556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0722 10:56:44.393119   30556 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:56:44.393608   30556 main.go:141] libmachine: Using API Version  1
	I0722 10:56:44.393631   30556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:56:44.393974   30556 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:56:44.394179   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:56:44.394331   30556 main.go:141] libmachine: (ha-461283) Calling .GetState
	I0722 10:56:44.395643   30556 fix.go:112] recreateIfNeeded on ha-461283: state=Running err=<nil>
	W0722 10:56:44.395663   30556 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 10:56:44.397212   30556 out.go:177] * Updating the running kvm2 "ha-461283" VM ...
	I0722 10:56:44.398397   30556 machine.go:94] provisionDockerMachine start ...
	I0722 10:56:44.398413   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:56:44.398576   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:56:44.400658   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.401029   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:56:44.401062   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.401180   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:56:44.401338   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:44.401469   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:44.401612   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:56:44.401738   30556 main.go:141] libmachine: Using SSH client type: native
	I0722 10:56:44.401943   30556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:56:44.401957   30556 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 10:56:44.505850   30556 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-461283
	
	I0722 10:56:44.505878   30556 main.go:141] libmachine: (ha-461283) Calling .GetMachineName
	I0722 10:56:44.506102   30556 buildroot.go:166] provisioning hostname "ha-461283"
	I0722 10:56:44.506130   30556 main.go:141] libmachine: (ha-461283) Calling .GetMachineName
	I0722 10:56:44.506287   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:56:44.509050   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.509459   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:56:44.509481   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.509610   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:56:44.509777   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:44.509957   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:44.510079   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:56:44.510235   30556 main.go:141] libmachine: Using SSH client type: native
	I0722 10:56:44.510392   30556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:56:44.510402   30556 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-461283 && echo "ha-461283" | sudo tee /etc/hostname
	I0722 10:56:44.633049   30556 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-461283
	
	I0722 10:56:44.633077   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:56:44.635818   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.636210   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:56:44.636236   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.636422   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:56:44.636610   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:44.636792   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:44.636967   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:56:44.637165   30556 main.go:141] libmachine: Using SSH client type: native
	I0722 10:56:44.637337   30556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:56:44.637353   30556 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-461283' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-461283/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-461283' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 10:56:44.741436   30556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 10:56:44.741464   30556 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 10:56:44.741489   30556 buildroot.go:174] setting up certificates
	I0722 10:56:44.741499   30556 provision.go:84] configureAuth start
	I0722 10:56:44.741510   30556 main.go:141] libmachine: (ha-461283) Calling .GetMachineName
	I0722 10:56:44.741731   30556 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:56:44.744185   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.744575   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:56:44.744597   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.744703   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:56:44.746507   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.746804   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:56:44.746824   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.746977   30556 provision.go:143] copyHostCerts
	I0722 10:56:44.747012   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 10:56:44.747050   30556 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 10:56:44.747061   30556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 10:56:44.747122   30556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 10:56:44.747250   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 10:56:44.747270   30556 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 10:56:44.747277   30556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 10:56:44.747307   30556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 10:56:44.747378   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 10:56:44.747397   30556 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 10:56:44.747406   30556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 10:56:44.747440   30556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 10:56:44.747490   30556 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.ha-461283 san=[127.0.0.1 192.168.39.43 ha-461283 localhost minikube]
	I0722 10:56:44.846180   30556 provision.go:177] copyRemoteCerts
	I0722 10:56:44.846230   30556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 10:56:44.846250   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:56:44.848578   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.848915   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:56:44.848954   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:44.849115   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:56:44.849279   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:44.849384   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:56:44.849482   30556 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:56:44.932121   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 10:56:44.932199   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 10:56:44.959792   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 10:56:44.959866   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0722 10:56:44.985600   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 10:56:44.985667   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 10:56:45.010882   30556 provision.go:87] duration metric: took 269.357091ms to configureAuth
	I0722 10:56:45.010907   30556 buildroot.go:189] setting minikube options for container-runtime
	I0722 10:56:45.011114   30556 config.go:182] Loaded profile config "ha-461283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:56:45.011182   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:56:45.013730   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:45.014136   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:56:45.014164   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:56:45.014347   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:56:45.014519   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:45.014666   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:56:45.014813   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:56:45.014940   30556 main.go:141] libmachine: Using SSH client type: native
	I0722 10:56:45.015080   30556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:56:45.015098   30556 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 10:58:15.961072   30556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 10:58:15.961100   30556 machine.go:97] duration metric: took 1m31.562689759s to provisionDockerMachine
	I0722 10:58:15.961114   30556 start.go:293] postStartSetup for "ha-461283" (driver="kvm2")
	I0722 10:58:15.961129   30556 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 10:58:15.961173   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:58:15.961483   30556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 10:58:15.961509   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:58:15.964279   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:15.964726   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:58:15.964745   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:15.964916   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:58:15.965113   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:58:15.965269   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:58:15.965392   30556 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:58:16.048142   30556 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 10:58:16.052793   30556 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 10:58:16.052814   30556 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 10:58:16.052887   30556 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 10:58:16.052956   30556 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 10:58:16.052966   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /etc/ssl/certs/130982.pem
	I0722 10:58:16.053043   30556 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 10:58:16.062947   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 10:58:16.087530   30556 start.go:296] duration metric: took 126.401427ms for postStartSetup
	I0722 10:58:16.087568   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:58:16.087880   30556 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0722 10:58:16.087917   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:58:16.090341   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.090735   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:58:16.090761   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.090872   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:58:16.091058   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:58:16.091233   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:58:16.091361   30556 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	W0722 10:58:16.172748   30556 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0722 10:58:16.172772   30556 fix.go:56] duration metric: took 1m31.794197231s for fixHost
	I0722 10:58:16.172797   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:58:16.175297   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.175650   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:58:16.175678   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.175792   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:58:16.176000   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:58:16.176152   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:58:16.176291   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:58:16.176454   30556 main.go:141] libmachine: Using SSH client type: native
	I0722 10:58:16.176611   30556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0722 10:58:16.176621   30556 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 10:58:16.281149   30556 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645896.227619791
	
	I0722 10:58:16.281173   30556 fix.go:216] guest clock: 1721645896.227619791
	I0722 10:58:16.281190   30556 fix.go:229] Guest: 2024-07-22 10:58:16.227619791 +0000 UTC Remote: 2024-07-22 10:58:16.172780914 +0000 UTC m=+91.911323146 (delta=54.838877ms)
	I0722 10:58:16.281208   30556 fix.go:200] guest clock delta is within tolerance: 54.838877ms
	I0722 10:58:16.281212   30556 start.go:83] releasing machines lock for "ha-461283", held for 1m31.902650281s
	I0722 10:58:16.281230   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:58:16.281499   30556 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:58:16.283794   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.284179   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:58:16.284216   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.284346   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:58:16.284839   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:58:16.285007   30556 main.go:141] libmachine: (ha-461283) Calling .DriverName
	I0722 10:58:16.285103   30556 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 10:58:16.285139   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:58:16.285174   30556 ssh_runner.go:195] Run: cat /version.json
	I0722 10:58:16.285196   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHHostname
	I0722 10:58:16.287595   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.287929   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.287958   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:58:16.287974   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.288085   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:58:16.288240   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:58:16.288333   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:58:16.288349   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:58:16.288357   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:16.288533   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHPort
	I0722 10:58:16.288539   30556 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:58:16.288696   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHKeyPath
	I0722 10:58:16.288821   30556 main.go:141] libmachine: (ha-461283) Calling .GetSSHUsername
	I0722 10:58:16.288974   30556 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/ha-461283/id_rsa Username:docker}
	I0722 10:58:16.387199   30556 ssh_runner.go:195] Run: systemctl --version
	I0722 10:58:16.393211   30556 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 10:58:16.554896   30556 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 10:58:16.562130   30556 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 10:58:16.562191   30556 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 10:58:16.572269   30556 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0722 10:58:16.572294   30556 start.go:495] detecting cgroup driver to use...
	I0722 10:58:16.572365   30556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 10:58:16.591559   30556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 10:58:16.613458   30556 docker.go:217] disabling cri-docker service (if available) ...
	I0722 10:58:16.613516   30556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 10:58:16.629634   30556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 10:58:16.647508   30556 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 10:58:16.825084   30556 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 10:58:16.992627   30556 docker.go:233] disabling docker service ...
	I0722 10:58:16.992698   30556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 10:58:17.010324   30556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 10:58:17.024874   30556 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 10:58:17.172822   30556 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 10:58:17.320679   30556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 10:58:17.344946   30556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 10:58:17.363236   30556 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 10:58:17.363286   30556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:58:17.373384   30556 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 10:58:17.373431   30556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:58:17.383546   30556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:58:17.393444   30556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:58:17.403689   30556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 10:58:17.413797   30556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:58:17.423796   30556 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:58:17.434008   30556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 10:58:17.444942   30556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 10:58:17.453947   30556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 10:58:17.462864   30556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:58:17.609669   30556 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 10:58:17.859336   30556 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 10:58:17.859407   30556 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 10:58:17.864618   30556 start.go:563] Will wait 60s for crictl version
	I0722 10:58:17.864680   30556 ssh_runner.go:195] Run: which crictl
	I0722 10:58:17.868449   30556 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 10:58:17.902879   30556 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 10:58:17.902963   30556 ssh_runner.go:195] Run: crio --version
	I0722 10:58:17.941305   30556 ssh_runner.go:195] Run: crio --version
	I0722 10:58:17.972942   30556 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 10:58:17.974424   30556 main.go:141] libmachine: (ha-461283) Calling .GetIP
	I0722 10:58:17.977003   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:17.977443   30556 main.go:141] libmachine: (ha-461283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:42:30", ip: ""} in network mk-ha-461283: {Iface:virbr1 ExpiryTime:2024-07-22 11:46:52 +0000 UTC Type:0 Mac:52:54:00:1d:42:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-461283 Clientid:01:52:54:00:1d:42:30}
	I0722 10:58:17.977466   30556 main.go:141] libmachine: (ha-461283) DBG | domain ha-461283 has defined IP address 192.168.39.43 and MAC address 52:54:00:1d:42:30 in network mk-ha-461283
	I0722 10:58:17.977696   30556 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 10:58:17.982428   30556 kubeadm.go:883] updating cluster {Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.250 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 10:58:17.982550   30556 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:58:17.982587   30556 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 10:58:18.023475   30556 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 10:58:18.023499   30556 crio.go:433] Images already preloaded, skipping extraction
	I0722 10:58:18.023552   30556 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 10:58:18.060289   30556 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 10:58:18.060313   30556 cache_images.go:84] Images are preloaded, skipping loading
	I0722 10:58:18.060322   30556 kubeadm.go:934] updating node { 192.168.39.43 8443 v1.30.3 crio true true} ...
	I0722 10:58:18.060433   30556 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-461283 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 10:58:18.060504   30556 ssh_runner.go:195] Run: crio config
	I0722 10:58:18.104870   30556 cni.go:84] Creating CNI manager for ""
	I0722 10:58:18.104892   30556 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0722 10:58:18.104903   30556 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 10:58:18.104926   30556 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-461283 NodeName:ha-461283 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 10:58:18.105085   30556 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-461283"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.43
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 10:58:18.105111   30556 kube-vip.go:115] generating kube-vip config ...
	I0722 10:58:18.105147   30556 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 10:58:18.116340   30556 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 10:58:18.116450   30556 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0722 10:58:18.116508   30556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 10:58:18.126063   30556 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 10:58:18.126128   30556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0722 10:58:18.134980   30556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0722 10:58:18.151480   30556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 10:58:18.167023   30556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0722 10:58:18.183341   30556 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0722 10:58:18.201374   30556 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0722 10:58:18.205250   30556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 10:58:18.348233   30556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 10:58:18.363198   30556 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283 for IP: 192.168.39.43
	I0722 10:58:18.363216   30556 certs.go:194] generating shared ca certs ...
	I0722 10:58:18.363233   30556 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:58:18.363378   30556 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 10:58:18.363418   30556 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 10:58:18.363427   30556 certs.go:256] generating profile certs ...
	I0722 10:58:18.363504   30556 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/client.key
	I0722 10:58:18.363532   30556 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.cf901a5f
	I0722 10:58:18.363547   30556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.cf901a5f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.43 192.168.39.207 192.168.39.127 192.168.39.254]
	I0722 10:58:18.578600   30556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.cf901a5f ...
	I0722 10:58:18.578633   30556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.cf901a5f: {Name:mk4d2f492b7ec7771aafc14b7c1acbc783e197ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:58:18.578810   30556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.cf901a5f ...
	I0722 10:58:18.578823   30556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.cf901a5f: {Name:mk42d4f337ea9970724178a867ba676d0b7166a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 10:58:18.578905   30556 certs.go:381] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt.cf901a5f -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt
	I0722 10:58:18.579061   30556 certs.go:385] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key.cf901a5f -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key
	I0722 10:58:18.579199   30556 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key
	I0722 10:58:18.579215   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 10:58:18.579230   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 10:58:18.579244   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 10:58:18.579259   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 10:58:18.579276   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 10:58:18.579291   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 10:58:18.579308   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 10:58:18.579322   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 10:58:18.579384   30556 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 10:58:18.579415   30556 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 10:58:18.579426   30556 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 10:58:18.579448   30556 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 10:58:18.579487   30556 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 10:58:18.579522   30556 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 10:58:18.579563   30556 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 10:58:18.579592   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:58:18.579608   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem -> /usr/share/ca-certificates/13098.pem
	I0722 10:58:18.579623   30556 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /usr/share/ca-certificates/130982.pem
	I0722 10:58:18.580176   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 10:58:18.604679   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 10:58:18.627775   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 10:58:18.651493   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 10:58:18.674406   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 10:58:18.696645   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 10:58:18.719044   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 10:58:18.740850   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/ha-461283/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 10:58:18.763232   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 10:58:18.786022   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 10:58:18.808895   30556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 10:58:18.831242   30556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 10:58:18.846985   30556 ssh_runner.go:195] Run: openssl version
	I0722 10:58:18.853303   30556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 10:58:18.863811   30556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:58:18.916760   30556 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:58:18.916826   30556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 10:58:18.945480   30556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 10:58:18.973789   30556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 10:58:19.197308   30556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 10:58:19.258914   30556 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 10:58:19.258988   30556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 10:58:19.328765   30556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 10:58:19.382917   30556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 10:58:19.410518   30556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 10:58:19.454817   30556 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 10:58:19.454894   30556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 10:58:19.538821   30556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 10:58:19.688767   30556 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 10:58:19.727644   30556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 10:58:19.804915   30556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 10:58:19.866213   30556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 10:58:19.897280   30556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 10:58:19.910329   30556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 10:58:19.970626   30556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 10:58:20.001932   30556 kubeadm.go:392] StartCluster: {Name:ha-461283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-461283 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.207 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.250 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:58:20.002100   30556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 10:58:20.002187   30556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 10:58:20.107111   30556 cri.go:89] found id: "db94009c521f9822effbf7134357899e47ffe347dcd2c5651040fefa4cca33b2"
	I0722 10:58:20.107139   30556 cri.go:89] found id: "394a8f4400ea3af4e39dfd78d7b3e2d8915e8e032aaab6f96a2cc89c41384bfd"
	I0722 10:58:20.107146   30556 cri.go:89] found id: "18af36a6c7e03b1d474476ce8bb6a180b7f5593bf25ef5639e35ee96f7fd9b7b"
	I0722 10:58:20.107152   30556 cri.go:89] found id: "ea5d5b8c8175c7f9b6fa74f3c735f5b5e950f3cae2432c981a515095aff7cbde"
	I0722 10:58:20.107157   30556 cri.go:89] found id: "3b03d6c4e851c5841d4c4a484805940e2d56e4bfef5f95ad9c5c11320f5ce2e2"
	I0722 10:58:20.107163   30556 cri.go:89] found id: "b79330205b3b34929616350d75a92dfb6b89364825873410805dfc7c904ffe48"
	I0722 10:58:20.107168   30556 cri.go:89] found id: "55b27c32c654e8450ab3013a13dfb71de85f5bd30812faee5de5482a651d8eea"
	I0722 10:58:20.107173   30556 cri.go:89] found id: "239d38a66181bacbf4ff6f4b6c27636a837636afff840f23efb250862938263c"
	I0722 10:58:20.107178   30556 cri.go:89] found id: "5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719"
	I0722 10:58:20.107187   30556 cri.go:89] found id: "797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a"
	I0722 10:58:20.107192   30556 cri.go:89] found id: "165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb"
	I0722 10:58:20.107197   30556 cri.go:89] found id: "8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44"
	I0722 10:58:20.107202   30556 cri.go:89] found id: "70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240"
	I0722 10:58:20.107207   30556 cri.go:89] found id: "08c8bf4f5df71e6a77d448b9212062f96e90349adbbad8f2e329463bd3e1884d"
	I0722 10:58:20.107214   30556 cri.go:89] found id: "dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08"
	I0722 10:58:20.107220   30556 cri.go:89] found id: ""
	I0722 10:58:20.107272   30556 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.459731703Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721646193459565569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=623aa223-4866-4d01-b3b6-2fdd888054af name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.460539143Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2929e5b-a9ba-4439-8138-fadfe62a0350 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.460609099Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2929e5b-a9ba-4439-8138-fadfe62a0350 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.461063950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d31f2a8013d2e24edc425273e44425f67e6b9bb2949a0bae5a2fc61a7180c0c,PodSandboxId:d7505be00a4dd0017426e901923c85556c7e1aab9ce2236d3bdf2f58ab23daad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721645982504681132,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dedb5e16ff7cefc4f7b2de3aab3f3666890577319fff80f0892bd25b07235ee4,PodSandboxId:122f37260fa272181bc977c5fab28d93568ba79ff386bcead824d3b18fdf5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721645944506619235,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d3862fae152922959e5745226d0d0346254a37fa19be0adbe29b48b98ca54f,PodSandboxId:1408cf32a9b111ad0d05c077ec792e7280e1c819bdab24d4ec83ae665e7eb31d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721645939505166534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Annotations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b666c66aefa1df7559d97096e7dbcc4b6708df300667bc67439e21aebef5507c,PodSandboxId:d7505be00a4dd0017426e901923c85556c7e1aab9ce2236d3bdf2f58ab23daad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721645937505645302,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97907e6d90193f3e298b74cb4c1ffbf5728c2a1c0b4e9f3b92965be2e2bd229,PodSandboxId:6b9fbbc4ff4d170d0fbaa8ea3ce27d5acaa45194f4dbdcb8c21011da489de5ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721645932944146967,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4fac42e604059d8b7bb12caf1f0c51694a1b661ab66338849703b3fbb4795e,PodSandboxId:539884c662756c5287d2b1fb6603b44f5fc001982dc4a7c3e612abec844858f1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721645911704398704,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 784e450de252cbe54f11c8aea749b974,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b8c278ca2273efc18e02c03ef51d705a9f50bc891b8d9a87cd8017ec61ffa6,PodSandboxId:e5bfddee43aaff99037f91d93444606703b272918e4137afe75feeabb3aa8498,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721645900810388135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:b354707c2b811ad6c903db093fc012a7903131693e84bd68bec08a423d37bfef,PodSandboxId:6f47141e7863442f1f1e1503d29ca7cf4025d4a49f20e97deadfde14147edca1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645899806127192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db94009c521f9822effbf7134357899e47ffe347dcd2c5651040fefa4cca33b2,PodSandboxId:640d3f2649cc233feb8cb448344c36e0aa252e2c06d532d2bf22b136b8c0b86f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721645899564553650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e24b057cc5e5b35b2d3967b6d4f607365955ef314fda4827d9bef6dbe115f61,PodSandboxId:e1b9e5c8554a61e601b07794864b22dcfadd7b2f1567561e191febd07d276299,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721645899554678964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394a8f4400ea3af4e39dfd78d7b3e2d8915e8e032aaab6f96a2cc89c41384bfd,PodSandboxId:e48aa2925bd6b9beb5f20038d85f152677c8328cbbfbe7c2cf8521228a46b709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645899461053883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e97d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18af36a6c7e03b1d474476ce8bb6a180b7f5593bf25ef5639e35ee96f7fd9b7b,PodSandboxId:122f37260fa272181bc977c5fab28d93568ba79ff386bcead824d3b18fdf5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721645899340665928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b03d6c4e851c5841d4c4a484805940e2d56e4bfef5f95ad9c5c11320f5ce2e2,PodSandboxId:767e5480a736f2cfadbd9af1f98ab5c4e9e00f4af6962bdcb8f0a372b607350a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721645899255967056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d2
0cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea5d5b8c8175c7f9b6fa74f3c735f5b5e950f3cae2432c981a515095aff7cbde,PodSandboxId:1408cf32a9b111ad0d05c077ec792e7280e1c819bdab24d4ec83ae665e7eb31d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721645899262523073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Ann
otations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e0d7d39c32b26c8cf125a279920668ca2e951905242a4868edd6a21fca1416c,PodSandboxId:816fd2e7cd706d322555f2f36fbd206ae986d8ed89be70bd1de6c5b649078cfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721645402571179847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719,PodSandboxId:4723f41d773ba6946b0397961e08b81adf4a47a279dd5f445f56c2a16eef40bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721645264374702822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kube
rnetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a,PodSandboxId:0c2ec5e338fb367db8b1a2ad528c7af8ed202e4bbaeab5159468a25321378cae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721645264350452857,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e97d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb,PodSandboxId:e171bdcb5b84c953c7434405b97b99a193f09e165b7345f7c8825f108427dce6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721645252505387479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44,PodSandboxId:ffbce6c0af4bc0412cbe807c70c74257bbd3514d683fc4a1672124862f6298c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721645250607555032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240,PodSandboxId:54a1041d8e184916319d562d688313e1c1dd4452462b22bf88d27c23483b8d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721645230463280784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d20cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08,PodSandboxId:e5abe1a4431950fd7a30c4d790682bbf3541b7faca55396453e5041f929979a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721645230331861897,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2929e5b-a9ba-4439-8138-fadfe62a0350 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.505258401Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb20db99-2c42-429c-8459-cbb374c18f0a name=/runtime.v1.RuntimeService/Version
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.505330286Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb20db99-2c42-429c-8459-cbb374c18f0a name=/runtime.v1.RuntimeService/Version
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.506872283Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82db618a-86ca-4549-aac4-633cdceb92dd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.507317999Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721646193507294221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82db618a-86ca-4549-aac4-633cdceb92dd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.507886981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=815bff7a-98eb-49df-b64d-68ed176ab017 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.507997612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=815bff7a-98eb-49df-b64d-68ed176ab017 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.508461899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d31f2a8013d2e24edc425273e44425f67e6b9bb2949a0bae5a2fc61a7180c0c,PodSandboxId:d7505be00a4dd0017426e901923c85556c7e1aab9ce2236d3bdf2f58ab23daad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721645982504681132,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dedb5e16ff7cefc4f7b2de3aab3f3666890577319fff80f0892bd25b07235ee4,PodSandboxId:122f37260fa272181bc977c5fab28d93568ba79ff386bcead824d3b18fdf5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721645944506619235,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d3862fae152922959e5745226d0d0346254a37fa19be0adbe29b48b98ca54f,PodSandboxId:1408cf32a9b111ad0d05c077ec792e7280e1c819bdab24d4ec83ae665e7eb31d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721645939505166534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Annotations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b666c66aefa1df7559d97096e7dbcc4b6708df300667bc67439e21aebef5507c,PodSandboxId:d7505be00a4dd0017426e901923c85556c7e1aab9ce2236d3bdf2f58ab23daad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721645937505645302,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97907e6d90193f3e298b74cb4c1ffbf5728c2a1c0b4e9f3b92965be2e2bd229,PodSandboxId:6b9fbbc4ff4d170d0fbaa8ea3ce27d5acaa45194f4dbdcb8c21011da489de5ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721645932944146967,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4fac42e604059d8b7bb12caf1f0c51694a1b661ab66338849703b3fbb4795e,PodSandboxId:539884c662756c5287d2b1fb6603b44f5fc001982dc4a7c3e612abec844858f1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721645911704398704,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 784e450de252cbe54f11c8aea749b974,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b8c278ca2273efc18e02c03ef51d705a9f50bc891b8d9a87cd8017ec61ffa6,PodSandboxId:e5bfddee43aaff99037f91d93444606703b272918e4137afe75feeabb3aa8498,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721645900810388135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:b354707c2b811ad6c903db093fc012a7903131693e84bd68bec08a423d37bfef,PodSandboxId:6f47141e7863442f1f1e1503d29ca7cf4025d4a49f20e97deadfde14147edca1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645899806127192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db94009c521f9822effbf7134357899e47ffe347dcd2c5651040fefa4cca33b2,PodSandboxId:640d3f2649cc233feb8cb448344c36e0aa252e2c06d532d2bf22b136b8c0b86f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721645899564553650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e24b057cc5e5b35b2d3967b6d4f607365955ef314fda4827d9bef6dbe115f61,PodSandboxId:e1b9e5c8554a61e601b07794864b22dcfadd7b2f1567561e191febd07d276299,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721645899554678964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394a8f4400ea3af4e39dfd78d7b3e2d8915e8e032aaab6f96a2cc89c41384bfd,PodSandboxId:e48aa2925bd6b9beb5f20038d85f152677c8328cbbfbe7c2cf8521228a46b709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645899461053883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e97d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18af36a6c7e03b1d474476ce8bb6a180b7f5593bf25ef5639e35ee96f7fd9b7b,PodSandboxId:122f37260fa272181bc977c5fab28d93568ba79ff386bcead824d3b18fdf5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721645899340665928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b03d6c4e851c5841d4c4a484805940e2d56e4bfef5f95ad9c5c11320f5ce2e2,PodSandboxId:767e5480a736f2cfadbd9af1f98ab5c4e9e00f4af6962bdcb8f0a372b607350a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721645899255967056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d2
0cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea5d5b8c8175c7f9b6fa74f3c735f5b5e950f3cae2432c981a515095aff7cbde,PodSandboxId:1408cf32a9b111ad0d05c077ec792e7280e1c819bdab24d4ec83ae665e7eb31d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721645899262523073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Ann
otations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e0d7d39c32b26c8cf125a279920668ca2e951905242a4868edd6a21fca1416c,PodSandboxId:816fd2e7cd706d322555f2f36fbd206ae986d8ed89be70bd1de6c5b649078cfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721645402571179847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719,PodSandboxId:4723f41d773ba6946b0397961e08b81adf4a47a279dd5f445f56c2a16eef40bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721645264374702822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kube
rnetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a,PodSandboxId:0c2ec5e338fb367db8b1a2ad528c7af8ed202e4bbaeab5159468a25321378cae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721645264350452857,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e97d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb,PodSandboxId:e171bdcb5b84c953c7434405b97b99a193f09e165b7345f7c8825f108427dce6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721645252505387479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44,PodSandboxId:ffbce6c0af4bc0412cbe807c70c74257bbd3514d683fc4a1672124862f6298c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721645250607555032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240,PodSandboxId:54a1041d8e184916319d562d688313e1c1dd4452462b22bf88d27c23483b8d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721645230463280784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d20cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08,PodSandboxId:e5abe1a4431950fd7a30c4d790682bbf3541b7faca55396453e5041f929979a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721645230331861897,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=815bff7a-98eb-49df-b64d-68ed176ab017 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.550472558Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ecc92fcd-b184-4ce5-ad14-fb87ea57246f name=/runtime.v1.RuntimeService/Version
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.550552295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ecc92fcd-b184-4ce5-ad14-fb87ea57246f name=/runtime.v1.RuntimeService/Version
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.552555920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0d6513b-3975-43bc-8311-a62944ad7a85 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.553124705Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721646193553094722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0d6513b-3975-43bc-8311-a62944ad7a85 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.554486789Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bab80418-037f-40a1-bba2-4fdd9885bec3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.554565925Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bab80418-037f-40a1-bba2-4fdd9885bec3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.555605314Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d31f2a8013d2e24edc425273e44425f67e6b9bb2949a0bae5a2fc61a7180c0c,PodSandboxId:d7505be00a4dd0017426e901923c85556c7e1aab9ce2236d3bdf2f58ab23daad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721645982504681132,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dedb5e16ff7cefc4f7b2de3aab3f3666890577319fff80f0892bd25b07235ee4,PodSandboxId:122f37260fa272181bc977c5fab28d93568ba79ff386bcead824d3b18fdf5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721645944506619235,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d3862fae152922959e5745226d0d0346254a37fa19be0adbe29b48b98ca54f,PodSandboxId:1408cf32a9b111ad0d05c077ec792e7280e1c819bdab24d4ec83ae665e7eb31d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721645939505166534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Annotations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b666c66aefa1df7559d97096e7dbcc4b6708df300667bc67439e21aebef5507c,PodSandboxId:d7505be00a4dd0017426e901923c85556c7e1aab9ce2236d3bdf2f58ab23daad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721645937505645302,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97907e6d90193f3e298b74cb4c1ffbf5728c2a1c0b4e9f3b92965be2e2bd229,PodSandboxId:6b9fbbc4ff4d170d0fbaa8ea3ce27d5acaa45194f4dbdcb8c21011da489de5ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721645932944146967,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4fac42e604059d8b7bb12caf1f0c51694a1b661ab66338849703b3fbb4795e,PodSandboxId:539884c662756c5287d2b1fb6603b44f5fc001982dc4a7c3e612abec844858f1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721645911704398704,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 784e450de252cbe54f11c8aea749b974,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b8c278ca2273efc18e02c03ef51d705a9f50bc891b8d9a87cd8017ec61ffa6,PodSandboxId:e5bfddee43aaff99037f91d93444606703b272918e4137afe75feeabb3aa8498,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721645900810388135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:b354707c2b811ad6c903db093fc012a7903131693e84bd68bec08a423d37bfef,PodSandboxId:6f47141e7863442f1f1e1503d29ca7cf4025d4a49f20e97deadfde14147edca1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645899806127192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db94009c521f9822effbf7134357899e47ffe347dcd2c5651040fefa4cca33b2,PodSandboxId:640d3f2649cc233feb8cb448344c36e0aa252e2c06d532d2bf22b136b8c0b86f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721645899564553650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e24b057cc5e5b35b2d3967b6d4f607365955ef314fda4827d9bef6dbe115f61,PodSandboxId:e1b9e5c8554a61e601b07794864b22dcfadd7b2f1567561e191febd07d276299,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721645899554678964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394a8f4400ea3af4e39dfd78d7b3e2d8915e8e032aaab6f96a2cc89c41384bfd,PodSandboxId:e48aa2925bd6b9beb5f20038d85f152677c8328cbbfbe7c2cf8521228a46b709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645899461053883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e97d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18af36a6c7e03b1d474476ce8bb6a180b7f5593bf25ef5639e35ee96f7fd9b7b,PodSandboxId:122f37260fa272181bc977c5fab28d93568ba79ff386bcead824d3b18fdf5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721645899340665928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b03d6c4e851c5841d4c4a484805940e2d56e4bfef5f95ad9c5c11320f5ce2e2,PodSandboxId:767e5480a736f2cfadbd9af1f98ab5c4e9e00f4af6962bdcb8f0a372b607350a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721645899255967056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d2
0cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea5d5b8c8175c7f9b6fa74f3c735f5b5e950f3cae2432c981a515095aff7cbde,PodSandboxId:1408cf32a9b111ad0d05c077ec792e7280e1c819bdab24d4ec83ae665e7eb31d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721645899262523073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Ann
otations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e0d7d39c32b26c8cf125a279920668ca2e951905242a4868edd6a21fca1416c,PodSandboxId:816fd2e7cd706d322555f2f36fbd206ae986d8ed89be70bd1de6c5b649078cfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721645402571179847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719,PodSandboxId:4723f41d773ba6946b0397961e08b81adf4a47a279dd5f445f56c2a16eef40bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721645264374702822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kube
rnetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a,PodSandboxId:0c2ec5e338fb367db8b1a2ad528c7af8ed202e4bbaeab5159468a25321378cae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721645264350452857,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e97d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb,PodSandboxId:e171bdcb5b84c953c7434405b97b99a193f09e165b7345f7c8825f108427dce6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721645252505387479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44,PodSandboxId:ffbce6c0af4bc0412cbe807c70c74257bbd3514d683fc4a1672124862f6298c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721645250607555032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240,PodSandboxId:54a1041d8e184916319d562d688313e1c1dd4452462b22bf88d27c23483b8d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721645230463280784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d20cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08,PodSandboxId:e5abe1a4431950fd7a30c4d790682bbf3541b7faca55396453e5041f929979a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721645230331861897,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bab80418-037f-40a1-bba2-4fdd9885bec3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.607017904Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28671677-7db0-4f35-bf92-44ade6a47e76 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.607094445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28671677-7db0-4f35-bf92-44ade6a47e76 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.608165917Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e652f669-bf19-4d41-bb39-ed29ae819eab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.608584751Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721646193608563789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e652f669-bf19-4d41-bb39-ed29ae819eab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.609219745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e05553ef-652a-44b7-92dc-4dc943769480 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.609302761Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e05553ef-652a-44b7-92dc-4dc943769480 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:03:13 ha-461283 crio[3751]: time="2024-07-22 11:03:13.609910363Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d31f2a8013d2e24edc425273e44425f67e6b9bb2949a0bae5a2fc61a7180c0c,PodSandboxId:d7505be00a4dd0017426e901923c85556c7e1aab9ce2236d3bdf2f58ab23daad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721645982504681132,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dedb5e16ff7cefc4f7b2de3aab3f3666890577319fff80f0892bd25b07235ee4,PodSandboxId:122f37260fa272181bc977c5fab28d93568ba79ff386bcead824d3b18fdf5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721645944506619235,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d3862fae152922959e5745226d0d0346254a37fa19be0adbe29b48b98ca54f,PodSandboxId:1408cf32a9b111ad0d05c077ec792e7280e1c819bdab24d4ec83ae665e7eb31d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721645939505166534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Annotations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b666c66aefa1df7559d97096e7dbcc4b6708df300667bc67439e21aebef5507c,PodSandboxId:d7505be00a4dd0017426e901923c85556c7e1aab9ce2236d3bdf2f58ab23daad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721645937505645302,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a336a57b-330a-4251-8e33-2b277593a565,},Annotations:map[string]string{io.kubernetes.container.hash: bd6e55a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97907e6d90193f3e298b74cb4c1ffbf5728c2a1c0b4e9f3b92965be2e2bd229,PodSandboxId:6b9fbbc4ff4d170d0fbaa8ea3ce27d5acaa45194f4dbdcb8c21011da489de5ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721645932944146967,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4fac42e604059d8b7bb12caf1f0c51694a1b661ab66338849703b3fbb4795e,PodSandboxId:539884c662756c5287d2b1fb6603b44f5fc001982dc4a7c3e612abec844858f1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721645911704398704,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 784e450de252cbe54f11c8aea749b974,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b8c278ca2273efc18e02c03ef51d705a9f50bc891b8d9a87cd8017ec61ffa6,PodSandboxId:e5bfddee43aaff99037f91d93444606703b272918e4137afe75feeabb3aa8498,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721645900810388135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:b354707c2b811ad6c903db093fc012a7903131693e84bd68bec08a423d37bfef,PodSandboxId:6f47141e7863442f1f1e1503d29ca7cf4025d4a49f20e97deadfde14147edca1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645899806127192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db94009c521f9822effbf7134357899e47ffe347dcd2c5651040fefa4cca33b2,PodSandboxId:640d3f2649cc233feb8cb448344c36e0aa252e2c06d532d2bf22b136b8c0b86f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721645899564553650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e24b057cc5e5b35b2d3967b6d4f607365955ef314fda4827d9bef6dbe115f61,PodSandboxId:e1b9e5c8554a61e601b07794864b22dcfadd7b2f1567561e191febd07d276299,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721645899554678964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394a8f4400ea3af4e39dfd78d7b3e2d8915e8e032aaab6f96a2cc89c41384bfd,PodSandboxId:e48aa2925bd6b9beb5f20038d85f152677c8328cbbfbe7c2cf8521228a46b709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721645899461053883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e97d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18af36a6c7e03b1d474476ce8bb6a180b7f5593bf25ef5639e35ee96f7fd9b7b,PodSandboxId:122f37260fa272181bc977c5fab28d93568ba79ff386bcead824d3b18fdf5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721645899340665928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2e6aa709297f0b149dac625c6b57cb57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b03d6c4e851c5841d4c4a484805940e2d56e4bfef5f95ad9c5c11320f5ce2e2,PodSandboxId:767e5480a736f2cfadbd9af1f98ab5c4e9e00f4af6962bdcb8f0a372b607350a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721645899255967056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d2
0cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea5d5b8c8175c7f9b6fa74f3c735f5b5e950f3cae2432c981a515095aff7cbde,PodSandboxId:1408cf32a9b111ad0d05c077ec792e7280e1c819bdab24d4ec83ae665e7eb31d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721645899262523073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65c6dd74f375ef85f17419802adf158,},Ann
otations:map[string]string{io.kubernetes.container.hash: 2edba14b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e0d7d39c32b26c8cf125a279920668ca2e951905242a4868edd6a21fca1416c,PodSandboxId:816fd2e7cd706d322555f2f36fbd206ae986d8ed89be70bd1de6c5b649078cfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721645402571179847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hkw9v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264707a6-61a4-4941-b996-0bebde73d4c7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 37762312,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719,PodSandboxId:4723f41d773ba6946b0397961e08b81adf4a47a279dd5f445f56c2a16eef40bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721645264374702822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zb547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54886641-9710-4355-86ff-016ad48b5cd5,},Annotations:map[string]string{io.kube
rnetes.container.hash: 5638ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a,PodSandboxId:0c2ec5e338fb367db8b1a2ad528c7af8ed202e4bbaeab5159468a25321378cae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721645264350452857,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrfdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c9698a-e97d-4b8a-ab71-f19003b5dcfd,},Annotations:map[string]string{io.kubernetes.container.hash: 42b57e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb,PodSandboxId:e171bdcb5b84c953c7434405b97b99a193f09e165b7345f7c8825f108427dce6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721645252505387479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmrqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe55aff-7926-481f-90cd-3cc209d79f63,},Annotations:map[string]string{io.kubernetes.container.hash: 319ce76b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44,PodSandboxId:ffbce6c0af4bc0412cbe807c70c74257bbd3514d683fc4a1672124862f6298c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721645250607555032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28zxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5894062f-0d05-45f4-88eb-da134f234e2d,},Annotations:map[string]string{io.kubernetes.container.hash: 99fd72e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240,PodSandboxId:54a1041d8e184916319d562d688313e1c1dd4452462b22bf88d27c23483b8d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721645230463280784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd68c8359ff10d20cdb4765063e7406,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08,PodSandboxId:e5abe1a4431950fd7a30c4d790682bbf3541b7faca55396453e5041f929979a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721645230331861897,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-461283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0608fb2357421a8f67d141afb485ed21,},Annotations:map[string]string{io.kubernetes.container.hash: 17f9fe56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e05553ef-652a-44b7-92dc-4dc943769480 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8d31f2a8013d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   d7505be00a4dd       storage-provisioner
	dedb5e16ff7ce       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   122f37260fa27       kube-controller-manager-ha-461283
	a4d3862fae152       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   1408cf32a9b11       kube-apiserver-ha-461283
	b666c66aefa1d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   d7505be00a4dd       storage-provisioner
	d97907e6d9019       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   6b9fbbc4ff4d1       busybox-fc5497c4f-hkw9v
	7c4fac42e6040       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   539884c662756       kube-vip-ha-461283
	37b8c278ca227       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   e5bfddee43aaf       kube-proxy-28zxf
	b354707c2b811       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   6f47141e78634       coredns-7db6d8ff4d-zb547
	db94009c521f9       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   640d3f2649cc2       kindnet-hmrqh
	3e24b057cc5e5       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   e1b9e5c8554a6       etcd-ha-461283
	394a8f4400ea3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   e48aa2925bd6b       coredns-7db6d8ff4d-qrfdd
	18af36a6c7e03       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Exited              kube-controller-manager   1                   122f37260fa27       kube-controller-manager-ha-461283
	ea5d5b8c8175c       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Exited              kube-apiserver            2                   1408cf32a9b11       kube-apiserver-ha-461283
	3b03d6c4e851c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   767e5480a736f       kube-scheduler-ha-461283
	4e0d7d39c32b2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   816fd2e7cd706       busybox-fc5497c4f-hkw9v
	5920882be1f91       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   4723f41d773ba       coredns-7db6d8ff4d-zb547
	797ae9e61fe18       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   0c2ec5e338fb3       coredns-7db6d8ff4d-qrfdd
	165b67d20aa98       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    15 minutes ago      Exited              kindnet-cni               0                   e171bdcb5b84c       kindnet-hmrqh
	8ad5ed56ce259       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      15 minutes ago      Exited              kube-proxy                0                   ffbce6c0af4bc       kube-proxy-28zxf
	70a36c3082983       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   54a1041d8e184       kube-scheduler-ha-461283
	dc7da6bdaabcb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   e5abe1a443195       etcd-ha-461283
	
	
	==> coredns [394a8f4400ea3af4e39dfd78d7b3e2d8915e8e032aaab6f96a2cc89c41384bfd] <==
	Trace[495896122]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:58:34.693)
	Trace[495896122]: [10.001240226s] [10.001240226s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43554->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43550->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43550->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43554->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [5920882be1f91f6531e2097bff538d865097c3c1e9b36427f6866dd437d75719] <==
	[INFO] 10.244.0.4:58821 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000212894s
	[INFO] 10.244.0.4:36629 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118072s
	[INFO] 10.244.0.4:39713 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00173787s
	[INFO] 10.244.2.2:34877 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249226s
	[INFO] 10.244.2.2:47321 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000169139s
	[INFO] 10.244.2.2:37812 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009086884s
	[INFO] 10.244.2.2:48940 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000477846s
	[INFO] 10.244.0.4:59919 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000067175s
	[INFO] 10.244.2.2:42645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116023s
	[INFO] 10.244.2.2:46340 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079971s
	[INFO] 10.244.1.2:40840 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133586s
	[INFO] 10.244.1.2:47315 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158975s
	[INFO] 10.244.1.2:41268 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093188s
	[INFO] 10.244.2.2:49311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014354s
	[INFO] 10.244.2.2:35152 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000214208s
	[INFO] 10.244.1.2:60324 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129417s
	[INFO] 10.244.1.2:58260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000228807s
	[INFO] 10.244.1.2:39894 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113717s
	[INFO] 10.244.0.4:56883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152128s
	[INFO] 10.244.0.4:39699 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074743s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1881&timeout=6m27s&timeoutSeconds=387&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1881&timeout=8m29s&timeoutSeconds=509&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1879&timeout=5m44s&timeoutSeconds=344&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [797ae9e61fe185bd4350f285f2651899377f6d485b39bbc48a09eae425d8f21a] <==
	[INFO] 10.244.1.2:50008 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060128s
	[INFO] 10.244.0.4:57021 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001828391s
	[INFO] 10.244.0.4:43357 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000054533s
	[INFO] 10.244.0.4:60216 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000029938s
	[INFO] 10.244.0.4:48124 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001149366s
	[INFO] 10.244.0.4:34363 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000035155s
	[INFO] 10.244.0.4:44217 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049654s
	[INFO] 10.244.0.4:35448 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000035288s
	[INFO] 10.244.2.2:42369 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105863s
	[INFO] 10.244.2.2:51781 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069936s
	[INFO] 10.244.1.2:47904 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103521s
	[INFO] 10.244.0.4:49081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120239s
	[INFO] 10.244.0.4:40762 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121632s
	[INFO] 10.244.0.4:59110 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066206s
	[INFO] 10.244.0.4:39650 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092772s
	[INFO] 10.244.2.2:51074 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000265828s
	[INFO] 10.244.2.2:58192 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000130056s
	[INFO] 10.244.1.2:54053 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000255068s
	[INFO] 10.244.0.4:50225 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000074972s
	[INFO] 10.244.0.4:44950 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000080101s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b354707c2b811ad6c903db093fc012a7903131693e84bd68bec08a423d37bfef] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48558->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48558->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:54442->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1297928674]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:58:31.367) (total time: 10319ms):
	Trace[1297928674]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:54442->10.96.0.1:443: read: connection reset by peer 10319ms (10:58:41.686)
	Trace[1297928674]: [10.31917915s] [10.31917915s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:54442->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-461283
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-461283
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-461283
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T10_47_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:47:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-461283
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:03:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 11:01:55 +0000   Mon, 22 Jul 2024 11:01:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 11:01:55 +0000   Mon, 22 Jul 2024 11:01:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 11:01:55 +0000   Mon, 22 Jul 2024 11:01:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 11:01:55 +0000   Mon, 22 Jul 2024 11:01:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-461283
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7adceecddbb41f7a81e4df2b7433c7b
	  System UUID:                f7adceec-ddbb-41f7-a81e-4df2b7433c7b
	  Boot ID:                    16bdd5e7-d27f-4ce8-a232-7bbe4c4337c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hkw9v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-qrfdd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-zb547             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-461283                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-hmrqh                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-461283             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-461283    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-28zxf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-461283             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-461283                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 4m11s              kube-proxy       
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           15m                node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	  Warning  ContainerGCFailed        5m58s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m1s               node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	  Normal   RegisteredNode           3m58s              node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	  Normal   RegisteredNode           3m7s               node-controller  Node ha-461283 event: Registered Node ha-461283 in Controller
	  Normal   NodeNotReady             105s               node-controller  Node ha-461283 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     79s (x2 over 15m)  kubelet          Node ha-461283 status is now: NodeHasSufficientPID
	  Normal   NodeReady                79s (x2 over 15m)  kubelet          Node ha-461283 status is now: NodeReady
	  Normal   NodeHasNoDiskPressure    79s (x2 over 15m)  kubelet          Node ha-461283 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  79s (x2 over 15m)  kubelet          Node ha-461283 status is now: NodeHasSufficientMemory
	
	
	Name:               ha-461283-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-461283-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-461283
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T10_48_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:48:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-461283-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:03:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:59:44 +0000   Mon, 22 Jul 2024 10:59:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:59:44 +0000   Mon, 22 Jul 2024 10:59:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:59:44 +0000   Mon, 22 Jul 2024 10:59:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:59:44 +0000   Mon, 22 Jul 2024 10:59:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.207
	  Hostname:    ha-461283-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 164987e6e4bd4513b51bbf58f6e5b85b
	  System UUID:                164987e6-e4bd-4513-b51b-bf58f6e5b85b
	  Boot ID:                    11a321ea-198f-4688-be0d-666d749fed47
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cgtcl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-461283-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-qsphb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-461283-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-461283-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-xkbsx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-461283-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-461283-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-461283-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-461283-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-461283-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-461283-m02 status is now: NodeNotReady
	  Normal  Starting                 4m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m31s (x8 over 4m31s)  kubelet          Node ha-461283-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m31s (x8 over 4m31s)  kubelet          Node ha-461283-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m31s (x7 over 4m31s)  kubelet          Node ha-461283-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	  Normal  RegisteredNode           3m58s                  node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	  Normal  RegisteredNode           3m7s                   node-controller  Node ha-461283-m02 event: Registered Node ha-461283-m02 in Controller
	
	
	Name:               ha-461283-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-461283-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-461283
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T10_50_37_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:50:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-461283-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:00:46 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 22 Jul 2024 11:00:26 +0000   Mon, 22 Jul 2024 11:01:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 22 Jul 2024 11:00:26 +0000   Mon, 22 Jul 2024 11:01:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 22 Jul 2024 11:00:26 +0000   Mon, 22 Jul 2024 11:01:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 22 Jul 2024 11:00:26 +0000   Mon, 22 Jul 2024 11:01:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-461283-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 02bf2f0ce1a340479f7577f27f1f3419
	  System UUID:                02bf2f0c-e1a3-4047-9f75-77f27f1f3419
	  Boot ID:                    ab1a4f0a-2ddd-4380-9855-5da6b113f11d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fr84h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-8h8rp              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-q6mgq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-461283-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-461283-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-461283-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-461283-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m1s                   node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal   RegisteredNode           3m58s                  node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal   NodeNotReady             3m20s                  node-controller  Node ha-461283-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m7s                   node-controller  Node ha-461283-m04 event: Registered Node ha-461283-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-461283-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-461283-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-461283-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s (x2 over 2m48s)  kubelet          Node ha-461283-m04 has been rebooted, boot id: ab1a4f0a-2ddd-4380-9855-5da6b113f11d
	  Normal   NodeReady                2m48s (x2 over 2m48s)  kubelet          Node ha-461283-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s                   node-controller  Node ha-461283-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.217704] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.054835] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059084] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.188930] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[Jul22 10:47] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.257396] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +4.205609] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +3.948218] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.066710] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.986663] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.075913] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.885402] kauditd_printk_skb: 18 callbacks suppressed
	[ +22.062510] kauditd_printk_skb: 38 callbacks suppressed
	[Jul22 10:48] kauditd_printk_skb: 26 callbacks suppressed
	[Jul22 10:58] systemd-fstab-generator[3670]: Ignoring "noauto" option for root device
	[  +0.160551] systemd-fstab-generator[3682]: Ignoring "noauto" option for root device
	[  +0.204290] systemd-fstab-generator[3696]: Ignoring "noauto" option for root device
	[  +0.142387] systemd-fstab-generator[3708]: Ignoring "noauto" option for root device
	[  +0.291989] systemd-fstab-generator[3736]: Ignoring "noauto" option for root device
	[  +0.742591] systemd-fstab-generator[3838]: Ignoring "noauto" option for root device
	[ +12.873585] kauditd_printk_skb: 217 callbacks suppressed
	[ +10.061797] kauditd_printk_skb: 1 callbacks suppressed
	[ +18.401778] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [3e24b057cc5e5b35b2d3967b6d4f607365955ef314fda4827d9bef6dbe115f61] <==
	{"level":"warn","ts":"2024-07-22T10:59:46.796596Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8982c3555c8db6c3","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-22T10:59:48.697217Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:59:48.697326Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:59:48.69789Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:59:48.724663Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"4537875a7ae50e01","to":"8982c3555c8db6c3","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-22T10:59:48.724764Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:59:48.727033Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"4537875a7ae50e01","to":"8982c3555c8db6c3","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-22T10:59:48.727077Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:59:54.119953Z","caller":"traceutil/trace.go:171","msg":"trace[771011943] transaction","detail":"{read_only:false; response_revision:2325; number_of_response:1; }","duration":"152.86711ms","start":"2024-07-22T10:59:53.967066Z","end":"2024-07-22T10:59:54.119933Z","steps":["trace[771011943] 'process raft request'  (duration: 143.070627ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T11:00:39.913173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 switched to configuration voters=(244508712777637344 4987603935014751745)"}
	{"level":"info","ts":"2024-07-22T11:00:39.915069Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"e2f92b1da63e7b06","local-member-id":"4537875a7ae50e01","removed-remote-peer-id":"8982c3555c8db6c3","removed-remote-peer-urls":["https://192.168.39.127:2380"]}
	{"level":"info","ts":"2024-07-22T11:00:39.915187Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"warn","ts":"2024-07-22T11:00:39.915404Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T11:00:39.915561Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"warn","ts":"2024-07-22T11:00:39.915928Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T11:00:39.91599Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T11:00:39.91611Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"warn","ts":"2024-07-22T11:00:39.9164Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3","error":"context canceled"}
	{"level":"warn","ts":"2024-07-22T11:00:39.916489Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"8982c3555c8db6c3","error":"failed to read 8982c3555c8db6c3 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-22T11:00:39.916601Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"warn","ts":"2024-07-22T11:00:39.917123Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3","error":"context canceled"}
	{"level":"info","ts":"2024-07-22T11:00:39.917301Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T11:00:39.917368Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T11:00:39.917405Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"4537875a7ae50e01","removed-remote-peer-id":"8982c3555c8db6c3"}
	{"level":"warn","ts":"2024-07-22T11:00:39.932293Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"4537875a7ae50e01","remote-peer-id-stream-handler":"4537875a7ae50e01","remote-peer-id-from":"8982c3555c8db6c3"}
	
	
	==> etcd [dc7da6bdaabcb5dc082897661c5604f2ea1d8fe64d1efcfa5f7154017ad3aa08] <==
	2024/07/22 10:56:45 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/22 10:56:45 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/22 10:56:45 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/22 10:56:45 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/22 10:56:45 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-22T10:56:45.262629Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.43:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T10:56:45.262733Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.43:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-22T10:56:45.264463Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"4537875a7ae50e01","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-22T10:56:45.26467Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"364ab60f9995de0"}
	{"level":"info","ts":"2024-07-22T10:56:45.264722Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"364ab60f9995de0"}
	{"level":"info","ts":"2024-07-22T10:56:45.26475Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"364ab60f9995de0"}
	{"level":"info","ts":"2024-07-22T10:56:45.264989Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0"}
	{"level":"info","ts":"2024-07-22T10:56:45.265172Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0"}
	{"level":"info","ts":"2024-07-22T10:56:45.265261Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"4537875a7ae50e01","remote-peer-id":"364ab60f9995de0"}
	{"level":"info","ts":"2024-07-22T10:56:45.265274Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"364ab60f9995de0"}
	{"level":"info","ts":"2024-07-22T10:56:45.26528Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:56:45.265289Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:56:45.265334Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:56:45.265386Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:56:45.26542Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:56:45.265447Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"4537875a7ae50e01","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:56:45.265472Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8982c3555c8db6c3"}
	{"level":"info","ts":"2024-07-22T10:56:45.26815Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.43:2380"}
	{"level":"info","ts":"2024-07-22T10:56:45.268338Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.43:2380"}
	{"level":"info","ts":"2024-07-22T10:56:45.268371Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-461283","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.43:2380"],"advertise-client-urls":["https://192.168.39.43:2379"]}
	
	
	==> kernel <==
	 11:03:14 up 16 min,  0 users,  load average: 0.24, 0.35, 0.28
	Linux ha-461283 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [165b67d20aa984644398e9481f48147ad5f0216b128edde50a0f501cf415adcb] <==
	I0722 10:56:23.636761       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 10:56:23.636826       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 10:56:23.636972       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0722 10:56:23.636994       1 main.go:322] Node ha-461283-m03 has CIDR [10.244.2.0/24] 
	I0722 10:56:23.637066       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 10:56:23.637092       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	I0722 10:56:33.636862       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 10:56:33.636950       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 10:56:33.637131       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0722 10:56:33.637154       1 main.go:322] Node ha-461283-m03 has CIDR [10.244.2.0/24] 
	I0722 10:56:33.637235       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 10:56:33.637260       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	I0722 10:56:33.637325       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 10:56:33.637345       1 main.go:299] handling current node
	E0722 10:56:38.315412       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1881&timeout=7m42s&timeoutSeconds=462&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	W0722 10:56:41.387418       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1881": dial tcp 10.96.0.1:443: connect: no route to host
	E0722 10:56:41.387730       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1881": dial tcp 10.96.0.1:443: connect: no route to host
	I0722 10:56:43.637161       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 10:56:43.637303       1 main.go:299] handling current node
	I0722 10:56:43.637402       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 10:56:43.637428       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 10:56:43.637701       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0722 10:56:43.637744       1 main.go:322] Node ha-461283-m03 has CIDR [10.244.2.0/24] 
	I0722 10:56:43.637945       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 10:56:43.638028       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [db94009c521f9822effbf7134357899e47ffe347dcd2c5651040fefa4cca33b2] <==
	I0722 11:02:30.860076       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 11:02:40.860299       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 11:02:40.860407       1 main.go:299] handling current node
	I0722 11:02:40.860438       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 11:02:40.860456       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 11:02:40.860610       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 11:02:40.860646       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	I0722 11:02:50.852462       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 11:02:50.852595       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	I0722 11:02:50.852859       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 11:02:50.852913       1 main.go:299] handling current node
	I0722 11:02:50.852960       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 11:02:50.852979       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 11:03:00.860562       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 11:03:00.860632       1 main.go:299] handling current node
	I0722 11:03:00.860649       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 11:03:00.860655       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 11:03:00.860946       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 11:03:00.860974       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	I0722 11:03:10.859271       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0722 11:03:10.859410       1 main.go:322] Node ha-461283-m02 has CIDR [10.244.1.0/24] 
	I0722 11:03:10.859562       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0722 11:03:10.859603       1 main.go:322] Node ha-461283-m04 has CIDR [10.244.3.0/24] 
	I0722 11:03:10.859716       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0722 11:03:10.859737       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a4d3862fae152922959e5745226d0d0346254a37fa19be0adbe29b48b98ca54f] <==
	I0722 10:59:01.418683       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0722 10:59:01.422872       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0722 10:59:01.422902       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0722 10:59:01.510654       1 shared_informer.go:320] Caches are synced for configmaps
	I0722 10:59:01.512060       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0722 10:59:01.514156       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0722 10:59:01.517538       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0722 10:59:01.517636       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0722 10:59:01.517679       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0722 10:59:01.517700       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0722 10:59:01.523556       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0722 10:59:01.523730       1 aggregator.go:165] initial CRD sync complete...
	I0722 10:59:01.523823       1 autoregister_controller.go:141] Starting autoregister controller
	I0722 10:59:01.523850       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0722 10:59:01.523872       1 cache.go:39] Caches are synced for autoregister controller
	W0722 10:59:01.529723       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.127]
	I0722 10:59:01.538826       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0722 10:59:01.543336       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0722 10:59:01.543371       1 policy_source.go:224] refreshing policies
	I0722 10:59:01.616080       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0722 10:59:01.631506       1 controller.go:615] quota admission added evaluator for: endpoints
	I0722 10:59:01.642514       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0722 10:59:01.651293       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0722 10:59:02.418224       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0722 10:59:02.777264       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.127 192.168.39.207 192.168.39.43]
	
	
	==> kube-apiserver [ea5d5b8c8175c7f9b6fa74f3c735f5b5e950f3cae2432c981a515095aff7cbde] <==
	I0722 10:58:20.093333       1 options.go:221] external host was not specified, using 192.168.39.43
	I0722 10:58:20.097582       1 server.go:148] Version: v1.30.3
	I0722 10:58:20.097639       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:58:20.669994       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0722 10:58:20.673269       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0722 10:58:20.677737       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0722 10:58:20.677942       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0722 10:58:20.678175       1 instance.go:299] Using reconciler: lease
	W0722 10:58:40.667753       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0722 10:58:40.668015       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0722 10:58:40.679375       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [18af36a6c7e03b1d474476ce8bb6a180b7f5593bf25ef5639e35ee96f7fd9b7b] <==
	I0722 10:58:21.228475       1 serving.go:380] Generated self-signed cert in-memory
	I0722 10:58:21.720222       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0722 10:58:21.720261       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:58:21.721936       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0722 10:58:21.722040       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0722 10:58:21.722472       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0722 10:58:21.722544       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0722 10:58:41.725414       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.43:8443/healthz\": dial tcp 192.168.39.43:8443: connect: connection refused"
	
	
	==> kube-controller-manager [dedb5e16ff7cefc4f7b2de3aab3f3666890577319fff80f0892bd25b07235ee4] <==
	E0722 11:00:56.937733       1 gc_controller.go:153] "Failed to get node" err="node \"ha-461283-m03\" not found" logger="pod-garbage-collector-controller" node="ha-461283-m03"
	E0722 11:00:56.937742       1 gc_controller.go:153] "Failed to get node" err="node \"ha-461283-m03\" not found" logger="pod-garbage-collector-controller" node="ha-461283-m03"
	E0722 11:01:16.938224       1 gc_controller.go:153] "Failed to get node" err="node \"ha-461283-m03\" not found" logger="pod-garbage-collector-controller" node="ha-461283-m03"
	E0722 11:01:16.938269       1 gc_controller.go:153] "Failed to get node" err="node \"ha-461283-m03\" not found" logger="pod-garbage-collector-controller" node="ha-461283-m03"
	E0722 11:01:16.938276       1 gc_controller.go:153] "Failed to get node" err="node \"ha-461283-m03\" not found" logger="pod-garbage-collector-controller" node="ha-461283-m03"
	E0722 11:01:16.938281       1 gc_controller.go:153] "Failed to get node" err="node \"ha-461283-m03\" not found" logger="pod-garbage-collector-controller" node="ha-461283-m03"
	E0722 11:01:16.938285       1 gc_controller.go:153] "Failed to get node" err="node \"ha-461283-m03\" not found" logger="pod-garbage-collector-controller" node="ha-461283-m03"
	I0722 11:01:27.128412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.439387ms"
	I0722 11:01:27.128644       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.983µs"
	I0722 11:01:29.229965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.811068ms"
	I0722 11:01:29.230734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.459µs"
	I0722 11:01:29.338377       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.185674ms"
	I0722 11:01:29.339661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="153.913µs"
	I0722 11:01:29.367307       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.419031ms"
	I0722 11:01:29.372096       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="116.189µs"
	I0722 11:01:56.565126       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-j4v7z EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-j4v7z\": the object has been modified; please apply your changes to the latest version and try again"
	I0722 11:01:56.565439       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"97c7df87-7608-41d0-a097-42928a86d743", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-j4v7z EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-j4v7z": the object has been modified; please apply your changes to the latest version and try again
	I0722 11:01:56.593062       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.130888ms"
	I0722 11:01:56.593173       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.678µs"
	I0722 11:01:56.666067       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.470772ms"
	I0722 11:01:56.666266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.942µs"
	I0722 11:01:56.666631       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-j4v7z EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-j4v7z\": the object has been modified; please apply your changes to the latest version and try again"
	I0722 11:01:56.667024       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"97c7df87-7608-41d0-a097-42928a86d743", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-j4v7z EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-j4v7z": the object has been modified; please apply your changes to the latest version and try again
	I0722 11:01:56.727629       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.738305ms"
	I0722 11:01:56.729555       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.523µs"
	
	
	==> kube-proxy [37b8c278ca2273efc18e02c03ef51d705a9f50bc891b8d9a87cd8017ec61ffa6] <==
	E0722 10:58:43.884474       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-461283\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0722 10:59:02.317179       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-461283\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0722 10:59:02.317570       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0722 10:59:02.359732       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 10:59:02.359864       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 10:59:02.359925       1 server_linux.go:165] "Using iptables Proxier"
	I0722 10:59:02.362684       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 10:59:02.363082       1 server.go:872] "Version info" version="v1.30.3"
	I0722 10:59:02.363150       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:59:02.364760       1 config.go:192] "Starting service config controller"
	I0722 10:59:02.364929       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 10:59:02.365050       1 config.go:101] "Starting endpoint slice config controller"
	I0722 10:59:02.366307       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 10:59:02.365108       1 config.go:319] "Starting node config controller"
	I0722 10:59:02.366444       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0722 10:59:05.387183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:59:05.387944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:59:05.387507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:59:05.388036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:59:05.387757       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:59:05.388088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:59:05.387647       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0722 10:59:06.266897       1 shared_informer.go:320] Caches are synced for node config
	I0722 10:59:06.267237       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 10:59:06.465645       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [8ad5ed56ce2591bf669f2bd165149da31f654a654a1f18f519e6244af7ce5b44] <==
	E0722 10:55:22.987431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1813": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:55:22.987219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:55:22.987481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:55:29.707132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1813": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:55:29.707247       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1813": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:55:29.707331       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:55:29.707377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:55:29.707316       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:55:29.707403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:55:39.564382       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:55:39.564725       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:55:39.565208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1813": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:55:39.565331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1813": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:55:42.638144       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:55:42.638400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:56:01.068221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:56:01.069296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:56:01.069042       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1813": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:56:01.069926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1813": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:56:04.140412       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:56:04.140558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:56:28.716026       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:56:28.716243       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0722 10:56:41.003438       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	E0722 10:56:41.003583       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-461283&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [3b03d6c4e851c5841d4c4a484805940e2d56e4bfef5f95ad9c5c11320f5ce2e2] <==
	W0722 10:58:57.567495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.43:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	E0722 10:58:57.567547       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.43:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	W0722 10:58:57.621313       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.43:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	E0722 10:58:57.621409       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.43:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	W0722 10:58:57.728034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.43:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	E0722 10:58:57.728073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.43:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	W0722 10:58:57.889275       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.43:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	E0722 10:58:57.889432       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.43:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	W0722 10:58:57.914501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.43:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	E0722 10:58:57.914555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.43:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	W0722 10:58:58.479352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.43:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	E0722 10:58:58.479469       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.43:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.43:8443: connect: connection refused
	W0722 10:59:01.445252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 10:59:01.445423       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 10:59:01.445734       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0722 10:59:01.445856       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0722 10:59:01.446078       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 10:59:01.446172       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0722 10:59:18.596530       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0722 11:00:36.552593       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qpsh2\": pod busybox-fc5497c4f-qpsh2 is already assigned to node \"ha-461283-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-qpsh2" node="ha-461283-m04"
	E0722 11:00:36.552929       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qpsh2\": pod busybox-fc5497c4f-qpsh2 is already assigned to node \"ha-461283-m04\"" pod="default/busybox-fc5497c4f-qpsh2"
	E0722 11:00:37.738748       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-fr84h\": pod busybox-fc5497c4f-fr84h is already assigned to node \"ha-461283-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-fr84h" node="ha-461283-m04"
	E0722 11:00:37.738921       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5944c3a1-bf00-4b5a-8f04-82ac973e5026(default/busybox-fc5497c4f-fr84h) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-fr84h"
	E0722 11:00:37.738952       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-fr84h\": pod busybox-fc5497c4f-fr84h is already assigned to node \"ha-461283-m04\"" pod="default/busybox-fc5497c4f-fr84h"
	I0722 11:00:37.738986       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-fr84h" node="ha-461283-m04"
	
	
	==> kube-scheduler [70a36c3082983acafb3df32e1fdb259bd950bd73f1e2fd9dc0079ec0053a2240] <==
	W0722 10:56:40.451431       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 10:56:40.451516       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 10:56:40.862330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 10:56:40.862422       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 10:56:40.898484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:40.898572       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:41.160342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 10:56:41.160391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 10:56:41.299295       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:41.299386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:43.621130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 10:56:43.621201       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 10:56:43.774446       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:43.774546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:44.371291       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:44.371377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:44.709710       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:44.709743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:44.713666       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 10:56:44.713691       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 10:56:44.817289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:44.817333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:44.876646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 10:56:44.876701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 10:56:45.109125       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 22 11:01:18 ha-461283 kubelet[1372]: E0722 11:01:18.295367    1372 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-461283?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 22 11:01:25 ha-461283 kubelet[1372]: E0722 11:01:25.391396    1372 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-461283\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-461283?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 22 11:01:28 ha-461283 kubelet[1372]: E0722 11:01:28.296003    1372 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-461283?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 22 11:01:35 ha-461283 kubelet[1372]: E0722 11:01:35.392708    1372 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-461283\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-461283?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 22 11:01:38 ha-461283 kubelet[1372]: E0722 11:01:38.296946    1372 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-461283?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 22 11:01:45 ha-461283 kubelet[1372]: E0722 11:01:45.393599    1372 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-461283\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-461283?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 22 11:01:45 ha-461283 kubelet[1372]: E0722 11:01:45.393650    1372 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 22 11:01:48 ha-461283 kubelet[1372]: E0722 11:01:48.297675    1372 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-461283?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 22 11:01:48 ha-461283 kubelet[1372]: I0722 11:01:48.298121    1372 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Jul 22 11:01:54 ha-461283 kubelet[1372]: W0722 11:01:54.417154    1372 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 22 11:01:54 ha-461283 kubelet[1372]: E0722 11:01:54.417203    1372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-461283?timeout=10s\": http2: client connection lost" interval="200ms"
	Jul 22 11:01:54 ha-461283 kubelet[1372]: W0722 11:01:54.417267    1372 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 22 11:01:54 ha-461283 kubelet[1372]: W0722 11:01:54.417292    1372 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 22 11:01:54 ha-461283 kubelet[1372]: W0722 11:01:54.417308    1372 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 22 11:01:54 ha-461283 kubelet[1372]: W0722 11:01:54.417328    1372 reflector.go:470] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 22 11:01:54 ha-461283 kubelet[1372]: W0722 11:01:54.417351    1372 reflector.go:470] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 22 11:01:54 ha-461283 kubelet[1372]: W0722 11:01:54.417367    1372 reflector.go:470] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 22 11:01:54 ha-461283 kubelet[1372]: W0722 11:01:54.417386    1372 reflector.go:470] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 22 11:01:54 ha-461283 kubelet[1372]: W0722 11:01:54.417402    1372 reflector.go:470] object-"default"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 22 11:01:55 ha-461283 kubelet[1372]: I0722 11:01:55.771861    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-461283" podStartSLOduration=140.771830171 podStartE2EDuration="2m20.771830171s" podCreationTimestamp="2024-07-22 10:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-22 10:59:43.87741904 +0000 UTC m=+747.520657349" watchObservedRunningTime="2024-07-22 11:01:55.771830171 +0000 UTC m=+879.415068457"
	Jul 22 11:02:16 ha-461283 kubelet[1372]: E0722 11:02:16.529440    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 11:02:16 ha-461283 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 11:02:16 ha-461283 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 11:02:16 ha-461283 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 11:02:16 ha-461283 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0722 11:03:13.195179   32917 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19313-5960/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-461283 -n ha-461283
helpers_test.go:261: (dbg) Run:  kubectl --context ha-461283 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.80s)

TestMultiNode/serial/RestartKeepsNodes (323.1s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-025157
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-025157
E0722 11:18:29.088825   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-025157: exit status 82 (2m1.863614383s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-025157-m03"  ...
	* Stopping node "multinode-025157-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-025157" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-025157 --wait=true -v=8 --alsologtostderr
E0722 11:21:32.133369   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 11:21:36.612186   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-025157 --wait=true -v=8 --alsologtostderr: (3m19.030600342s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-025157
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-025157 -n multinode-025157
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-025157 logs -n 25: (1.464235072s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-025157 ssh -n                                                                 | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-025157 cp multinode-025157-m02:/home/docker/cp-test.txt                       | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile430864957/001/cp-test_multinode-025157-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n                                                                 | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-025157 cp multinode-025157-m02:/home/docker/cp-test.txt                       | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157:/home/docker/cp-test_multinode-025157-m02_multinode-025157.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n                                                                 | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n multinode-025157 sudo cat                                       | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | /home/docker/cp-test_multinode-025157-m02_multinode-025157.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-025157 cp multinode-025157-m02:/home/docker/cp-test.txt                       | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m03:/home/docker/cp-test_multinode-025157-m02_multinode-025157-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n                                                                 | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n multinode-025157-m03 sudo cat                                   | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | /home/docker/cp-test_multinode-025157-m02_multinode-025157-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-025157 cp testdata/cp-test.txt                                                | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n                                                                 | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-025157 cp multinode-025157-m03:/home/docker/cp-test.txt                       | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile430864957/001/cp-test_multinode-025157-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n                                                                 | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-025157 cp multinode-025157-m03:/home/docker/cp-test.txt                       | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157:/home/docker/cp-test_multinode-025157-m03_multinode-025157.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n                                                                 | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n multinode-025157 sudo cat                                       | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | /home/docker/cp-test_multinode-025157-m03_multinode-025157.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-025157 cp multinode-025157-m03:/home/docker/cp-test.txt                       | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m02:/home/docker/cp-test_multinode-025157-m03_multinode-025157-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n                                                                 | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n multinode-025157-m02 sudo cat                                   | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | /home/docker/cp-test_multinode-025157-m03_multinode-025157-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-025157 node stop m03                                                          | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	| node    | multinode-025157 node start                                                             | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:17 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-025157                                                                | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:17 UTC |                     |
	| stop    | -p multinode-025157                                                                     | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:17 UTC |                     |
	| start   | -p multinode-025157                                                                     | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:19 UTC | 22 Jul 24 11:22 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-025157                                                                | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:22 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 11:19:28
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 11:19:28.940446   42537 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:19:28.940685   42537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:19:28.940694   42537 out.go:304] Setting ErrFile to fd 2...
	I0722 11:19:28.940698   42537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:19:28.940915   42537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:19:28.941477   42537 out.go:298] Setting JSON to false
	I0722 11:19:28.942336   42537 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3721,"bootTime":1721643448,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 11:19:28.942389   42537 start.go:139] virtualization: kvm guest
	I0722 11:19:28.944497   42537 out.go:177] * [multinode-025157] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 11:19:28.946055   42537 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 11:19:28.946062   42537 notify.go:220] Checking for updates...
	I0722 11:19:28.947401   42537 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 11:19:28.948672   42537 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:19:28.949955   42537 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:19:28.951238   42537 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 11:19:28.952427   42537 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 11:19:28.953971   42537 config.go:182] Loaded profile config "multinode-025157": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:19:28.954062   42537 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 11:19:28.954469   42537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:19:28.954532   42537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:19:28.970503   42537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36227
	I0722 11:19:28.970893   42537 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:19:28.971401   42537 main.go:141] libmachine: Using API Version  1
	I0722 11:19:28.971424   42537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:19:28.971734   42537 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:19:28.971900   42537 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:19:29.006534   42537 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 11:19:29.007598   42537 start.go:297] selected driver: kvm2
	I0722 11:19:29.007614   42537 start.go:901] validating driver "kvm2" against &{Name:multinode-025157 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-025157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.50 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:19:29.007752   42537 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 11:19:29.008077   42537 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:19:29.008147   42537 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 11:19:29.022299   42537 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 11:19:29.022947   42537 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:19:29.022973   42537 cni.go:84] Creating CNI manager for ""
	I0722 11:19:29.022980   42537 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0722 11:19:29.023075   42537 start.go:340] cluster config:
	{Name:multinode-025157 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-025157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.50 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:19:29.023217   42537 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:19:29.024811   42537 out.go:177] * Starting "multinode-025157" primary control-plane node in "multinode-025157" cluster
	I0722 11:19:29.025859   42537 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:19:29.025892   42537 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 11:19:29.025902   42537 cache.go:56] Caching tarball of preloaded images
	I0722 11:19:29.025974   42537 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 11:19:29.025985   42537 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 11:19:29.026095   42537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/config.json ...
	I0722 11:19:29.026269   42537 start.go:360] acquireMachinesLock for multinode-025157: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:19:29.026308   42537 start.go:364] duration metric: took 23.362µs to acquireMachinesLock for "multinode-025157"
	I0722 11:19:29.026324   42537 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:19:29.026331   42537 fix.go:54] fixHost starting: 
	I0722 11:19:29.026559   42537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:19:29.026589   42537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:19:29.039750   42537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33867
	I0722 11:19:29.040179   42537 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:19:29.040639   42537 main.go:141] libmachine: Using API Version  1
	I0722 11:19:29.040660   42537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:19:29.041007   42537 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:19:29.041167   42537 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:19:29.041306   42537 main.go:141] libmachine: (multinode-025157) Calling .GetState
	I0722 11:19:29.042992   42537 fix.go:112] recreateIfNeeded on multinode-025157: state=Running err=<nil>
	W0722 11:19:29.043014   42537 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:19:29.044770   42537 out.go:177] * Updating the running kvm2 "multinode-025157" VM ...
	I0722 11:19:29.045928   42537 machine.go:94] provisionDockerMachine start ...
	I0722 11:19:29.045942   42537 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:19:29.046105   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:19:29.048762   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.049224   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:19:29.049251   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.049380   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:19:29.049510   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.049629   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.049773   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:19:29.049941   42537 main.go:141] libmachine: Using SSH client type: native
	I0722 11:19:29.050149   42537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0722 11:19:29.050160   42537 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:19:29.165276   42537 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-025157
	
	I0722 11:19:29.165303   42537 main.go:141] libmachine: (multinode-025157) Calling .GetMachineName
	I0722 11:19:29.165541   42537 buildroot.go:166] provisioning hostname "multinode-025157"
	I0722 11:19:29.165563   42537 main.go:141] libmachine: (multinode-025157) Calling .GetMachineName
	I0722 11:19:29.165734   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:19:29.168107   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.168463   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:19:29.168489   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.168627   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:19:29.168807   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.168970   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.169097   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:19:29.169269   42537 main.go:141] libmachine: Using SSH client type: native
	I0722 11:19:29.169456   42537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0722 11:19:29.169473   42537 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-025157 && echo "multinode-025157" | sudo tee /etc/hostname
	I0722 11:19:29.298983   42537 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-025157
	
	I0722 11:19:29.299006   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:19:29.301852   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.302163   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:19:29.302191   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.302369   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:19:29.302534   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.302670   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.302773   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:19:29.302914   42537 main.go:141] libmachine: Using SSH client type: native
	I0722 11:19:29.303067   42537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0722 11:19:29.303082   42537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-025157' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-025157/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-025157' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:19:29.417111   42537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:19:29.417153   42537 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:19:29.417172   42537 buildroot.go:174] setting up certificates
	I0722 11:19:29.417183   42537 provision.go:84] configureAuth start
	I0722 11:19:29.417196   42537 main.go:141] libmachine: (multinode-025157) Calling .GetMachineName
	I0722 11:19:29.417440   42537 main.go:141] libmachine: (multinode-025157) Calling .GetIP
	I0722 11:19:29.420100   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.420472   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:19:29.420492   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.420624   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:19:29.422573   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.422851   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:19:29.422880   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.423008   42537 provision.go:143] copyHostCerts
	I0722 11:19:29.423040   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:19:29.423071   42537 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:19:29.423082   42537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:19:29.423150   42537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:19:29.423220   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:19:29.423239   42537 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:19:29.423246   42537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:19:29.423270   42537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:19:29.423308   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:19:29.423323   42537 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:19:29.423332   42537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:19:29.423353   42537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:19:29.423394   42537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.multinode-025157 san=[127.0.0.1 192.168.39.158 localhost minikube multinode-025157]
	I0722 11:19:29.573434   42537 provision.go:177] copyRemoteCerts
	I0722 11:19:29.573492   42537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:19:29.573516   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:19:29.576337   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.576724   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:19:29.576749   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.576952   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:19:29.577149   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.577290   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:19:29.577419   42537 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/multinode-025157/id_rsa Username:docker}
	I0722 11:19:29.664064   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 11:19:29.664123   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:19:29.688486   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 11:19:29.688553   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0722 11:19:29.713460   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 11:19:29.713524   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:19:29.737998   42537 provision.go:87] duration metric: took 320.802381ms to configureAuth
	I0722 11:19:29.738024   42537 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:19:29.738216   42537 config.go:182] Loaded profile config "multinode-025157": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:19:29.738278   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:19:29.741159   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.741547   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:19:29.741578   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.741730   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:19:29.741937   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.742098   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.742258   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:19:29.742415   42537 main.go:141] libmachine: Using SSH client type: native
	I0722 11:19:29.742573   42537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0722 11:19:29.742588   42537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:21:00.596780   42537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:21:00.596811   42537 machine.go:97] duration metric: took 1m31.550872531s to provisionDockerMachine
	I0722 11:21:00.596822   42537 start.go:293] postStartSetup for "multinode-025157" (driver="kvm2")
	I0722 11:21:00.596843   42537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:21:00.596858   42537 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:21:00.597214   42537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:21:00.597249   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:21:00.600268   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.600701   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:21:00.600727   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.600833   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:21:00.600997   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:21:00.601146   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:21:00.601300   42537 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/multinode-025157/id_rsa Username:docker}
	I0722 11:21:00.686695   42537 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:21:00.690774   42537 command_runner.go:130] > NAME=Buildroot
	I0722 11:21:00.690795   42537 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0722 11:21:00.690801   42537 command_runner.go:130] > ID=buildroot
	I0722 11:21:00.690808   42537 command_runner.go:130] > VERSION_ID=2023.02.9
	I0722 11:21:00.690815   42537 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0722 11:21:00.690942   42537 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:21:00.690968   42537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:21:00.691029   42537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:21:00.691127   42537 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:21:00.691147   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /etc/ssl/certs/130982.pem
	I0722 11:21:00.691250   42537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:21:00.700630   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:21:00.725131   42537 start.go:296] duration metric: took 128.296952ms for postStartSetup
	I0722 11:21:00.725180   42537 fix.go:56] duration metric: took 1m31.698847619s for fixHost
	I0722 11:21:00.725206   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:21:00.727581   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.727884   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:21:00.727917   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.728059   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:21:00.728275   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:21:00.728433   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:21:00.728580   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:21:00.728748   42537 main.go:141] libmachine: Using SSH client type: native
	I0722 11:21:00.728899   42537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0722 11:21:00.728909   42537 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:21:00.840916   42537 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721647260.805135741
	
	I0722 11:21:00.840942   42537 fix.go:216] guest clock: 1721647260.805135741
	I0722 11:21:00.840953   42537 fix.go:229] Guest: 2024-07-22 11:21:00.805135741 +0000 UTC Remote: 2024-07-22 11:21:00.725187922 +0000 UTC m=+91.817074186 (delta=79.947819ms)
	I0722 11:21:00.841003   42537 fix.go:200] guest clock delta is within tolerance: 79.947819ms
	I0722 11:21:00.841014   42537 start.go:83] releasing machines lock for "multinode-025157", held for 1m31.814696704s
	I0722 11:21:00.841042   42537 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:21:00.841284   42537 main.go:141] libmachine: (multinode-025157) Calling .GetIP
	I0722 11:21:00.843841   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.844226   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:21:00.844254   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.844410   42537 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:21:00.844914   42537 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:21:00.845079   42537 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:21:00.845143   42537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:21:00.845195   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:21:00.845322   42537 ssh_runner.go:195] Run: cat /version.json
	I0722 11:21:00.845346   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:21:00.847793   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.848073   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.848160   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:21:00.848185   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.848284   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:21:00.848424   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:21:00.848445   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.848456   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:21:00.848622   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:21:00.848629   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:21:00.848830   42537 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/multinode-025157/id_rsa Username:docker}
	I0722 11:21:00.848861   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:21:00.848974   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:21:00.849113   42537 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/multinode-025157/id_rsa Username:docker}
	I0722 11:21:00.960240   42537 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0722 11:21:00.960906   42537 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0722 11:21:00.961098   42537 ssh_runner.go:195] Run: systemctl --version
	I0722 11:21:00.966849   42537 command_runner.go:130] > systemd 252 (252)
	I0722 11:21:00.966889   42537 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0722 11:21:00.966933   42537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:21:01.121958   42537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 11:21:01.128969   42537 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0722 11:21:01.129068   42537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:21:01.129122   42537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:21:01.138300   42537 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0722 11:21:01.138316   42537 start.go:495] detecting cgroup driver to use...
	I0722 11:21:01.138369   42537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:21:01.156166   42537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:21:01.169523   42537 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:21:01.169563   42537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:21:01.182893   42537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:21:01.196040   42537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:21:01.346539   42537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:21:01.489776   42537 docker.go:233] disabling docker service ...
	I0722 11:21:01.489855   42537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:21:01.509896   42537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:21:01.523880   42537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:21:01.669671   42537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:21:01.818848   42537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:21:01.833156   42537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:21:01.851042   42537 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0722 11:21:01.851335   42537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 11:21:01.851388   42537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:21:01.861459   42537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:21:01.861522   42537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:21:01.871283   42537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:21:01.881195   42537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:21:01.890990   42537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:21:01.901596   42537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:21:01.911997   42537 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:21:01.923462   42537 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
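The four sed edits above are how this run points CRI-O at the pause image, the cgroupfs driver, the conmon cgroup and the unprivileged-port sysctl. A quick manual check of the result on the guest (a sketch against the same file the commands above edit, not part of this run's output) would be:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf

which, if the edits applied cleanly, should print lines equivalent to:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
      "net.ipv4.ip_unprivileged_port_start=0",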
	I0722 11:21:01.934514   42537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:21:01.944374   42537 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0722 11:21:01.944429   42537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:21:01.954074   42537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:21:02.088313   42537 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:21:03.517176   42537 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.428825339s)
	I0722 11:21:03.517204   42537 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:21:03.517255   42537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:21:03.522308   42537 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0722 11:21:03.522327   42537 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0722 11:21:03.522336   42537 command_runner.go:130] > Device: 0,22	Inode: 1336        Links: 1
	I0722 11:21:03.522346   42537 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0722 11:21:03.522354   42537 command_runner.go:130] > Access: 2024-07-22 11:21:03.377422846 +0000
	I0722 11:21:03.522362   42537 command_runner.go:130] > Modify: 2024-07-22 11:21:03.377422846 +0000
	I0722 11:21:03.522373   42537 command_runner.go:130] > Change: 2024-07-22 11:21:03.377422846 +0000
	I0722 11:21:03.522382   42537 command_runner.go:130] >  Birth: -
	I0722 11:21:03.522404   42537 start.go:563] Will wait 60s for crictl version
	I0722 11:21:03.522444   42537 ssh_runner.go:195] Run: which crictl
	I0722 11:21:03.526170   42537 command_runner.go:130] > /usr/bin/crictl
	I0722 11:21:03.526217   42537 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:21:03.560726   42537 command_runner.go:130] > Version:  0.1.0
	I0722 11:21:03.560751   42537 command_runner.go:130] > RuntimeName:  cri-o
	I0722 11:21:03.560757   42537 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0722 11:21:03.560764   42537 command_runner.go:130] > RuntimeApiVersion:  v1
	I0722 11:21:03.560786   42537 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
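crictl here resolves its endpoint from the /etc/crictl.yaml written a moment earlier; the same probe can be reproduced by hand, pointing at the socket explicitly (a sketch, not output captured in this run):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version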
	I0722 11:21:03.560852   42537 ssh_runner.go:195] Run: crio --version
	I0722 11:21:03.589061   42537 command_runner.go:130] > crio version 1.29.1
	I0722 11:21:03.589078   42537 command_runner.go:130] > Version:        1.29.1
	I0722 11:21:03.589084   42537 command_runner.go:130] > GitCommit:      unknown
	I0722 11:21:03.589089   42537 command_runner.go:130] > GitCommitDate:  unknown
	I0722 11:21:03.589092   42537 command_runner.go:130] > GitTreeState:   clean
	I0722 11:21:03.589097   42537 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0722 11:21:03.589102   42537 command_runner.go:130] > GoVersion:      go1.21.6
	I0722 11:21:03.589106   42537 command_runner.go:130] > Compiler:       gc
	I0722 11:21:03.589110   42537 command_runner.go:130] > Platform:       linux/amd64
	I0722 11:21:03.589114   42537 command_runner.go:130] > Linkmode:       dynamic
	I0722 11:21:03.589119   42537 command_runner.go:130] > BuildTags:      
	I0722 11:21:03.589125   42537 command_runner.go:130] >   containers_image_ostree_stub
	I0722 11:21:03.589130   42537 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0722 11:21:03.589135   42537 command_runner.go:130] >   btrfs_noversion
	I0722 11:21:03.589142   42537 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0722 11:21:03.589158   42537 command_runner.go:130] >   libdm_no_deferred_remove
	I0722 11:21:03.589163   42537 command_runner.go:130] >   seccomp
	I0722 11:21:03.589172   42537 command_runner.go:130] > LDFlags:          unknown
	I0722 11:21:03.589177   42537 command_runner.go:130] > SeccompEnabled:   true
	I0722 11:21:03.589181   42537 command_runner.go:130] > AppArmorEnabled:  false
	I0722 11:21:03.589284   42537 ssh_runner.go:195] Run: crio --version
	I0722 11:21:03.622360   42537 command_runner.go:130] > crio version 1.29.1
	I0722 11:21:03.622383   42537 command_runner.go:130] > Version:        1.29.1
	I0722 11:21:03.622391   42537 command_runner.go:130] > GitCommit:      unknown
	I0722 11:21:03.622398   42537 command_runner.go:130] > GitCommitDate:  unknown
	I0722 11:21:03.622404   42537 command_runner.go:130] > GitTreeState:   clean
	I0722 11:21:03.622412   42537 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0722 11:21:03.622417   42537 command_runner.go:130] > GoVersion:      go1.21.6
	I0722 11:21:03.622421   42537 command_runner.go:130] > Compiler:       gc
	I0722 11:21:03.622425   42537 command_runner.go:130] > Platform:       linux/amd64
	I0722 11:21:03.622430   42537 command_runner.go:130] > Linkmode:       dynamic
	I0722 11:21:03.622440   42537 command_runner.go:130] > BuildTags:      
	I0722 11:21:03.622446   42537 command_runner.go:130] >   containers_image_ostree_stub
	I0722 11:21:03.622452   42537 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0722 11:21:03.622458   42537 command_runner.go:130] >   btrfs_noversion
	I0722 11:21:03.622466   42537 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0722 11:21:03.622476   42537 command_runner.go:130] >   libdm_no_deferred_remove
	I0722 11:21:03.622484   42537 command_runner.go:130] >   seccomp
	I0722 11:21:03.622494   42537 command_runner.go:130] > LDFlags:          unknown
	I0722 11:21:03.622500   42537 command_runner.go:130] > SeccompEnabled:   true
	I0722 11:21:03.622512   42537 command_runner.go:130] > AppArmorEnabled:  false
	I0722 11:21:03.625182   42537 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 11:21:03.626538   42537 main.go:141] libmachine: (multinode-025157) Calling .GetIP
	I0722 11:21:03.629111   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:03.629506   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:21:03.629530   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:03.629738   42537 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 11:21:03.634272   42537 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0722 11:21:03.634472   42537 kubeadm.go:883] updating cluster {Name:multinode-025157 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:multinode-025157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.50 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:21:03.634665   42537 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:21:03.634737   42537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:21:03.688677   42537 command_runner.go:130] > {
	I0722 11:21:03.688702   42537 command_runner.go:130] >   "images": [
	I0722 11:21:03.688708   42537 command_runner.go:130] >     {
	I0722 11:21:03.688719   42537 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0722 11:21:03.688728   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.688737   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0722 11:21:03.688743   42537 command_runner.go:130] >       ],
	I0722 11:21:03.688749   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.688761   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0722 11:21:03.688772   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0722 11:21:03.688777   42537 command_runner.go:130] >       ],
	I0722 11:21:03.688784   42537 command_runner.go:130] >       "size": "87165492",
	I0722 11:21:03.688792   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.688797   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.688805   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.688809   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.688813   42537 command_runner.go:130] >     },
	I0722 11:21:03.688816   42537 command_runner.go:130] >     {
	I0722 11:21:03.688822   42537 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0722 11:21:03.688826   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.688831   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0722 11:21:03.688834   42537 command_runner.go:130] >       ],
	I0722 11:21:03.688839   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.688846   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0722 11:21:03.688854   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0722 11:21:03.688857   42537 command_runner.go:130] >       ],
	I0722 11:21:03.688861   42537 command_runner.go:130] >       "size": "87174707",
	I0722 11:21:03.688864   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.688872   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.688879   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.688884   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.688888   42537 command_runner.go:130] >     },
	I0722 11:21:03.688891   42537 command_runner.go:130] >     {
	I0722 11:21:03.688897   42537 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0722 11:21:03.688901   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.688906   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0722 11:21:03.688909   42537 command_runner.go:130] >       ],
	I0722 11:21:03.688913   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.688920   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0722 11:21:03.688929   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0722 11:21:03.688933   42537 command_runner.go:130] >       ],
	I0722 11:21:03.688937   42537 command_runner.go:130] >       "size": "1363676",
	I0722 11:21:03.688941   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.688948   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.688952   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.688955   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.688959   42537 command_runner.go:130] >     },
	I0722 11:21:03.688962   42537 command_runner.go:130] >     {
	I0722 11:21:03.688968   42537 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0722 11:21:03.688972   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.688976   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0722 11:21:03.688980   42537 command_runner.go:130] >       ],
	I0722 11:21:03.688984   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.688991   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0722 11:21:03.689002   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0722 11:21:03.689006   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689009   42537 command_runner.go:130] >       "size": "31470524",
	I0722 11:21:03.689013   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.689017   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.689021   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.689025   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.689030   42537 command_runner.go:130] >     },
	I0722 11:21:03.689033   42537 command_runner.go:130] >     {
	I0722 11:21:03.689038   42537 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0722 11:21:03.689042   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.689047   42537 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0722 11:21:03.689053   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689057   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.689066   42537 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0722 11:21:03.689078   42537 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0722 11:21:03.689085   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689093   42537 command_runner.go:130] >       "size": "61245718",
	I0722 11:21:03.689100   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.689107   42537 command_runner.go:130] >       "username": "nonroot",
	I0722 11:21:03.689113   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.689119   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.689127   42537 command_runner.go:130] >     },
	I0722 11:21:03.689131   42537 command_runner.go:130] >     {
	I0722 11:21:03.689153   42537 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0722 11:21:03.689161   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.689168   42537 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0722 11:21:03.689176   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689182   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.689194   42537 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0722 11:21:03.689207   42537 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0722 11:21:03.689215   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689220   42537 command_runner.go:130] >       "size": "150779692",
	I0722 11:21:03.689225   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.689229   42537 command_runner.go:130] >         "value": "0"
	I0722 11:21:03.689235   42537 command_runner.go:130] >       },
	I0722 11:21:03.689239   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.689245   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.689248   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.689252   42537 command_runner.go:130] >     },
	I0722 11:21:03.689256   42537 command_runner.go:130] >     {
	I0722 11:21:03.689264   42537 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0722 11:21:03.689268   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.689273   42537 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0722 11:21:03.689277   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689280   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.689289   42537 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0722 11:21:03.689296   42537 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0722 11:21:03.689302   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689306   42537 command_runner.go:130] >       "size": "117609954",
	I0722 11:21:03.689312   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.689315   42537 command_runner.go:130] >         "value": "0"
	I0722 11:21:03.689323   42537 command_runner.go:130] >       },
	I0722 11:21:03.689327   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.689331   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.689335   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.689341   42537 command_runner.go:130] >     },
	I0722 11:21:03.689344   42537 command_runner.go:130] >     {
	I0722 11:21:03.689353   42537 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0722 11:21:03.689357   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.689365   42537 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0722 11:21:03.689369   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689374   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.689388   42537 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0722 11:21:03.689398   42537 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0722 11:21:03.689404   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689408   42537 command_runner.go:130] >       "size": "112198984",
	I0722 11:21:03.689414   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.689418   42537 command_runner.go:130] >         "value": "0"
	I0722 11:21:03.689423   42537 command_runner.go:130] >       },
	I0722 11:21:03.689427   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.689431   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.689434   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.689438   42537 command_runner.go:130] >     },
	I0722 11:21:03.689441   42537 command_runner.go:130] >     {
	I0722 11:21:03.689446   42537 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0722 11:21:03.689450   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.689455   42537 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0722 11:21:03.689458   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689462   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.689469   42537 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0722 11:21:03.689475   42537 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0722 11:21:03.689478   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689482   42537 command_runner.go:130] >       "size": "85953945",
	I0722 11:21:03.689486   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.689490   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.689495   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.689499   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.689502   42537 command_runner.go:130] >     },
	I0722 11:21:03.689505   42537 command_runner.go:130] >     {
	I0722 11:21:03.689511   42537 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0722 11:21:03.689515   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.689521   42537 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0722 11:21:03.689525   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689532   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.689539   42537 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0722 11:21:03.689548   42537 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0722 11:21:03.689553   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689557   42537 command_runner.go:130] >       "size": "63051080",
	I0722 11:21:03.689565   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.689570   42537 command_runner.go:130] >         "value": "0"
	I0722 11:21:03.689575   42537 command_runner.go:130] >       },
	I0722 11:21:03.689579   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.689584   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.689589   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.689600   42537 command_runner.go:130] >     },
	I0722 11:21:03.689605   42537 command_runner.go:130] >     {
	I0722 11:21:03.689612   42537 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0722 11:21:03.689621   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.689628   42537 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0722 11:21:03.689636   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689641   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.689655   42537 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0722 11:21:03.689666   42537 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0722 11:21:03.689673   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689679   42537 command_runner.go:130] >       "size": "750414",
	I0722 11:21:03.689687   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.689694   42537 command_runner.go:130] >         "value": "65535"
	I0722 11:21:03.689701   42537 command_runner.go:130] >       },
	I0722 11:21:03.689705   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.689709   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.689713   42537 command_runner.go:130] >       "pinned": true
	I0722 11:21:03.689718   42537 command_runner.go:130] >     }
	I0722 11:21:03.689722   42537 command_runner.go:130] >   ]
	I0722 11:21:03.689725   42537 command_runner.go:130] > }
	I0722 11:21:03.689924   42537 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:21:03.689938   42537 crio.go:433] Images already preloaded, skipping extraction
	I0722 11:21:03.689981   42537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:21:03.724684   42537 command_runner.go:130] > {
	I0722 11:21:03.724712   42537 command_runner.go:130] >   "images": [
	I0722 11:21:03.724719   42537 command_runner.go:130] >     {
	I0722 11:21:03.724733   42537 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0722 11:21:03.724741   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.724750   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0722 11:21:03.724757   42537 command_runner.go:130] >       ],
	I0722 11:21:03.724763   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.724774   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0722 11:21:03.724785   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0722 11:21:03.724793   42537 command_runner.go:130] >       ],
	I0722 11:21:03.724801   42537 command_runner.go:130] >       "size": "87165492",
	I0722 11:21:03.724812   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.724819   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.724831   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.724842   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.724849   42537 command_runner.go:130] >     },
	I0722 11:21:03.724856   42537 command_runner.go:130] >     {
	I0722 11:21:03.724870   42537 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0722 11:21:03.724878   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.724891   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0722 11:21:03.724901   42537 command_runner.go:130] >       ],
	I0722 11:21:03.724910   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.724920   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0722 11:21:03.724929   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0722 11:21:03.724936   42537 command_runner.go:130] >       ],
	I0722 11:21:03.724941   42537 command_runner.go:130] >       "size": "87174707",
	I0722 11:21:03.724947   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.724956   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.724963   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.724967   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.724971   42537 command_runner.go:130] >     },
	I0722 11:21:03.724977   42537 command_runner.go:130] >     {
	I0722 11:21:03.724985   42537 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0722 11:21:03.724992   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.724997   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0722 11:21:03.725018   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725026   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725033   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0722 11:21:03.725043   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0722 11:21:03.725049   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725054   42537 command_runner.go:130] >       "size": "1363676",
	I0722 11:21:03.725058   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.725062   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.725067   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725073   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.725077   42537 command_runner.go:130] >     },
	I0722 11:21:03.725083   42537 command_runner.go:130] >     {
	I0722 11:21:03.725089   42537 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0722 11:21:03.725100   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.725108   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0722 11:21:03.725115   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725119   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725126   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0722 11:21:03.725146   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0722 11:21:03.725152   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725157   42537 command_runner.go:130] >       "size": "31470524",
	I0722 11:21:03.725163   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.725167   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.725174   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725179   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.725185   42537 command_runner.go:130] >     },
	I0722 11:21:03.725189   42537 command_runner.go:130] >     {
	I0722 11:21:03.725196   42537 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0722 11:21:03.725203   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.725208   42537 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0722 11:21:03.725215   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725219   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725229   42537 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0722 11:21:03.725239   42537 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0722 11:21:03.725245   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725250   42537 command_runner.go:130] >       "size": "61245718",
	I0722 11:21:03.725257   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.725262   42537 command_runner.go:130] >       "username": "nonroot",
	I0722 11:21:03.725269   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725273   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.725279   42537 command_runner.go:130] >     },
	I0722 11:21:03.725283   42537 command_runner.go:130] >     {
	I0722 11:21:03.725292   42537 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0722 11:21:03.725298   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.725302   42537 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0722 11:21:03.725308   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725313   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725326   42537 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0722 11:21:03.725337   42537 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0722 11:21:03.725344   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725348   42537 command_runner.go:130] >       "size": "150779692",
	I0722 11:21:03.725355   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.725359   42537 command_runner.go:130] >         "value": "0"
	I0722 11:21:03.725366   42537 command_runner.go:130] >       },
	I0722 11:21:03.725371   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.725377   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725382   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.725388   42537 command_runner.go:130] >     },
	I0722 11:21:03.725392   42537 command_runner.go:130] >     {
	I0722 11:21:03.725400   42537 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0722 11:21:03.725407   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.725412   42537 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0722 11:21:03.725418   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725423   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725433   42537 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0722 11:21:03.725442   42537 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0722 11:21:03.725448   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725453   42537 command_runner.go:130] >       "size": "117609954",
	I0722 11:21:03.725459   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.725463   42537 command_runner.go:130] >         "value": "0"
	I0722 11:21:03.725467   42537 command_runner.go:130] >       },
	I0722 11:21:03.725473   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.725484   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725491   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.725495   42537 command_runner.go:130] >     },
	I0722 11:21:03.725501   42537 command_runner.go:130] >     {
	I0722 11:21:03.725507   42537 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0722 11:21:03.725513   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.725519   42537 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0722 11:21:03.725525   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725529   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725551   42537 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0722 11:21:03.725562   42537 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0722 11:21:03.725570   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725574   42537 command_runner.go:130] >       "size": "112198984",
	I0722 11:21:03.725581   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.725585   42537 command_runner.go:130] >         "value": "0"
	I0722 11:21:03.725591   42537 command_runner.go:130] >       },
	I0722 11:21:03.725596   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.725602   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725606   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.725612   42537 command_runner.go:130] >     },
	I0722 11:21:03.725618   42537 command_runner.go:130] >     {
	I0722 11:21:03.725632   42537 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0722 11:21:03.725643   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.725651   42537 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0722 11:21:03.725660   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725667   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725682   42537 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0722 11:21:03.725697   42537 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0722 11:21:03.725707   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725714   42537 command_runner.go:130] >       "size": "85953945",
	I0722 11:21:03.725725   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.725732   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.725742   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725750   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.725756   42537 command_runner.go:130] >     },
	I0722 11:21:03.725760   42537 command_runner.go:130] >     {
	I0722 11:21:03.725779   42537 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0722 11:21:03.725787   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.725792   42537 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0722 11:21:03.725798   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725803   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725812   42537 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0722 11:21:03.725820   42537 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0722 11:21:03.725826   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725830   42537 command_runner.go:130] >       "size": "63051080",
	I0722 11:21:03.725836   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.725841   42537 command_runner.go:130] >         "value": "0"
	I0722 11:21:03.725848   42537 command_runner.go:130] >       },
	I0722 11:21:03.725859   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.725866   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725871   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.725877   42537 command_runner.go:130] >     },
	I0722 11:21:03.725881   42537 command_runner.go:130] >     {
	I0722 11:21:03.725889   42537 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0722 11:21:03.725895   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.725900   42537 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0722 11:21:03.725907   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725911   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725920   42537 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0722 11:21:03.725929   42537 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0722 11:21:03.725935   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725939   42537 command_runner.go:130] >       "size": "750414",
	I0722 11:21:03.725946   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.725950   42537 command_runner.go:130] >         "value": "65535"
	I0722 11:21:03.725956   42537 command_runner.go:130] >       },
	I0722 11:21:03.725960   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.725967   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725971   42537 command_runner.go:130] >       "pinned": true
	I0722 11:21:03.725977   42537 command_runner.go:130] >     }
	I0722 11:21:03.725980   42537 command_runner.go:130] >   ]
	I0722 11:21:03.725986   42537 command_runner.go:130] > }
	I0722 11:21:03.726119   42537 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:21:03.726131   42537 cache_images.go:84] Images are preloaded, skipping loading
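Both listings above are the raw `sudo crictl images --output json` payload that minikube compares against its v1.30.3 preload manifest. To eyeball just the tags present on the node, a short sketch (assuming jq is installed in the guest, which the ISO does not guarantee) is:

    sudo crictl images --output json | jq -r '.images[].repoTags[]'

or, without jq, plain `sudo crictl images` for the tabular view.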
	I0722 11:21:03.726137   42537 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.30.3 crio true true} ...
	I0722 11:21:03.726247   42537 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-025157 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-025157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
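The [Unit]/[Service]/[Install] fragment above is the kubelet systemd override minikube renders from the node entry for 192.168.39.158 (the --hostname-override and --node-ip flags come from that entry). Once it has been written and systemd reloaded, the effective unit can be inspected on the guest with (a sketch, not output from this run):

    sudo systemctl cat kubelet
    sudo systemctl is-active kubelet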
	I0722 11:21:03.726313   42537 ssh_runner.go:195] Run: crio config
	I0722 11:21:03.759944   42537 command_runner.go:130] ! time="2024-07-22 11:21:03.724083465Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0722 11:21:03.766245   42537 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0722 11:21:03.778388   42537 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0722 11:21:03.778406   42537 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0722 11:21:03.778412   42537 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0722 11:21:03.778415   42537 command_runner.go:130] > #
	I0722 11:21:03.778422   42537 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0722 11:21:03.778428   42537 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0722 11:21:03.778433   42537 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0722 11:21:03.778441   42537 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0722 11:21:03.778446   42537 command_runner.go:130] > # reload'.
	I0722 11:21:03.778455   42537 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0722 11:21:03.778465   42537 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0722 11:21:03.778480   42537 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0722 11:21:03.778489   42537 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0722 11:21:03.778495   42537 command_runner.go:130] > [crio]
	I0722 11:21:03.778504   42537 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0722 11:21:03.778511   42537 command_runner.go:130] > # containers images, in this directory.
	I0722 11:21:03.778522   42537 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0722 11:21:03.778533   42537 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0722 11:21:03.778544   42537 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0722 11:21:03.778553   42537 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0722 11:21:03.778559   42537 command_runner.go:130] > # imagestore = ""
	I0722 11:21:03.778568   42537 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0722 11:21:03.778574   42537 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0722 11:21:03.778580   42537 command_runner.go:130] > storage_driver = "overlay"
	I0722 11:21:03.778587   42537 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0722 11:21:03.778593   42537 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0722 11:21:03.778597   42537 command_runner.go:130] > storage_option = [
	I0722 11:21:03.778602   42537 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0722 11:21:03.778608   42537 command_runner.go:130] > ]
	I0722 11:21:03.778614   42537 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0722 11:21:03.778621   42537 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0722 11:21:03.778631   42537 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0722 11:21:03.778639   42537 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0722 11:21:03.778650   42537 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0722 11:21:03.778657   42537 command_runner.go:130] > # always happen on a node reboot
	I0722 11:21:03.778665   42537 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0722 11:21:03.778678   42537 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0722 11:21:03.778690   42537 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0722 11:21:03.778697   42537 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0722 11:21:03.778708   42537 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0722 11:21:03.778720   42537 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0722 11:21:03.778737   42537 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0722 11:21:03.778743   42537 command_runner.go:130] > # internal_wipe = true
	I0722 11:21:03.778752   42537 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0722 11:21:03.778759   42537 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0722 11:21:03.778763   42537 command_runner.go:130] > # internal_repair = false
	I0722 11:21:03.778770   42537 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0722 11:21:03.778776   42537 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0722 11:21:03.778783   42537 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0722 11:21:03.778788   42537 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0722 11:21:03.778796   42537 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0722 11:21:03.778799   42537 command_runner.go:130] > [crio.api]
	I0722 11:21:03.778810   42537 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0722 11:21:03.778817   42537 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0722 11:21:03.778822   42537 command_runner.go:130] > # IP address on which the stream server will listen.
	I0722 11:21:03.778828   42537 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0722 11:21:03.778834   42537 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0722 11:21:03.778842   42537 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0722 11:21:03.778846   42537 command_runner.go:130] > # stream_port = "0"
	I0722 11:21:03.778852   42537 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0722 11:21:03.778861   42537 command_runner.go:130] > # stream_enable_tls = false
	I0722 11:21:03.778867   42537 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0722 11:21:03.778873   42537 command_runner.go:130] > # stream_idle_timeout = ""
	I0722 11:21:03.778880   42537 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0722 11:21:03.778887   42537 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0722 11:21:03.778891   42537 command_runner.go:130] > # minutes.
	I0722 11:21:03.778895   42537 command_runner.go:130] > # stream_tls_cert = ""
	I0722 11:21:03.778902   42537 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0722 11:21:03.778907   42537 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0722 11:21:03.778913   42537 command_runner.go:130] > # stream_tls_key = ""
	I0722 11:21:03.778919   42537 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0722 11:21:03.778927   42537 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0722 11:21:03.778941   42537 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0722 11:21:03.778947   42537 command_runner.go:130] > # stream_tls_ca = ""
	I0722 11:21:03.778954   42537 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0722 11:21:03.778960   42537 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0722 11:21:03.778967   42537 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0722 11:21:03.778974   42537 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0722 11:21:03.778980   42537 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0722 11:21:03.778987   42537 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0722 11:21:03.778991   42537 command_runner.go:130] > [crio.runtime]
	I0722 11:21:03.778997   42537 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0722 11:21:03.779004   42537 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0722 11:21:03.779008   42537 command_runner.go:130] > # "nofile=1024:2048"
	I0722 11:21:03.779016   42537 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0722 11:21:03.779022   42537 command_runner.go:130] > # default_ulimits = [
	I0722 11:21:03.779026   42537 command_runner.go:130] > # ]
	I0722 11:21:03.779034   42537 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0722 11:21:03.779040   42537 command_runner.go:130] > # no_pivot = false
	I0722 11:21:03.779045   42537 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0722 11:21:03.779053   42537 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0722 11:21:03.779060   42537 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0722 11:21:03.779066   42537 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0722 11:21:03.779073   42537 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0722 11:21:03.779079   42537 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0722 11:21:03.779086   42537 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0722 11:21:03.779090   42537 command_runner.go:130] > # Cgroup setting for conmon
	I0722 11:21:03.779098   42537 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0722 11:21:03.779104   42537 command_runner.go:130] > conmon_cgroup = "pod"
	I0722 11:21:03.779118   42537 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0722 11:21:03.779125   42537 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0722 11:21:03.779131   42537 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0722 11:21:03.779137   42537 command_runner.go:130] > conmon_env = [
	I0722 11:21:03.779142   42537 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0722 11:21:03.779147   42537 command_runner.go:130] > ]
	I0722 11:21:03.779152   42537 command_runner.go:130] > # Additional environment variables to set for all the
	I0722 11:21:03.779157   42537 command_runner.go:130] > # containers. These are overridden if set in the
	I0722 11:21:03.779162   42537 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0722 11:21:03.779168   42537 command_runner.go:130] > # default_env = [
	I0722 11:21:03.779171   42537 command_runner.go:130] > # ]
	I0722 11:21:03.779178   42537 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0722 11:21:03.779185   42537 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0722 11:21:03.779191   42537 command_runner.go:130] > # selinux = false
	I0722 11:21:03.779197   42537 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0722 11:21:03.779205   42537 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0722 11:21:03.779211   42537 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0722 11:21:03.779217   42537 command_runner.go:130] > # seccomp_profile = ""
	I0722 11:21:03.779222   42537 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0722 11:21:03.779230   42537 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0722 11:21:03.779237   42537 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0722 11:21:03.779242   42537 command_runner.go:130] > # which might increase security.
	I0722 11:21:03.779248   42537 command_runner.go:130] > # This option is currently deprecated,
	I0722 11:21:03.779253   42537 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0722 11:21:03.779260   42537 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0722 11:21:03.779266   42537 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0722 11:21:03.779273   42537 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0722 11:21:03.779282   42537 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0722 11:21:03.779288   42537 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0722 11:21:03.779295   42537 command_runner.go:130] > # This option supports live configuration reload.
	I0722 11:21:03.779300   42537 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0722 11:21:03.779307   42537 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0722 11:21:03.779311   42537 command_runner.go:130] > # the cgroup blockio controller.
	I0722 11:21:03.779317   42537 command_runner.go:130] > # blockio_config_file = ""
	I0722 11:21:03.779323   42537 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0722 11:21:03.779329   42537 command_runner.go:130] > # blockio parameters.
	I0722 11:21:03.779333   42537 command_runner.go:130] > # blockio_reload = false
	I0722 11:21:03.779342   42537 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0722 11:21:03.779348   42537 command_runner.go:130] > # irqbalance daemon.
	I0722 11:21:03.779353   42537 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0722 11:21:03.779361   42537 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0722 11:21:03.779367   42537 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0722 11:21:03.779375   42537 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0722 11:21:03.779383   42537 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0722 11:21:03.779390   42537 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0722 11:21:03.779397   42537 command_runner.go:130] > # This option supports live configuration reload.
	I0722 11:21:03.779401   42537 command_runner.go:130] > # rdt_config_file = ""
	I0722 11:21:03.779406   42537 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0722 11:21:03.779412   42537 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0722 11:21:03.779426   42537 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0722 11:21:03.779432   42537 command_runner.go:130] > # separate_pull_cgroup = ""
	I0722 11:21:03.779438   42537 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0722 11:21:03.779446   42537 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0722 11:21:03.779452   42537 command_runner.go:130] > # will be added.
	I0722 11:21:03.779456   42537 command_runner.go:130] > # default_capabilities = [
	I0722 11:21:03.779461   42537 command_runner.go:130] > # 	"CHOWN",
	I0722 11:21:03.779465   42537 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0722 11:21:03.779470   42537 command_runner.go:130] > # 	"FSETID",
	I0722 11:21:03.779474   42537 command_runner.go:130] > # 	"FOWNER",
	I0722 11:21:03.779479   42537 command_runner.go:130] > # 	"SETGID",
	I0722 11:21:03.779483   42537 command_runner.go:130] > # 	"SETUID",
	I0722 11:21:03.779487   42537 command_runner.go:130] > # 	"SETPCAP",
	I0722 11:21:03.779493   42537 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0722 11:21:03.779496   42537 command_runner.go:130] > # 	"KILL",
	I0722 11:21:03.779500   42537 command_runner.go:130] > # ]
	I0722 11:21:03.779507   42537 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0722 11:21:03.779515   42537 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0722 11:21:03.779523   42537 command_runner.go:130] > # add_inheritable_capabilities = false
	I0722 11:21:03.779529   42537 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0722 11:21:03.779536   42537 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0722 11:21:03.779542   42537 command_runner.go:130] > default_sysctls = [
	I0722 11:21:03.779547   42537 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0722 11:21:03.779552   42537 command_runner.go:130] > ]
	I0722 11:21:03.779557   42537 command_runner.go:130] > # List of devices on the host that a
	I0722 11:21:03.779564   42537 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0722 11:21:03.779568   42537 command_runner.go:130] > # allowed_devices = [
	I0722 11:21:03.779574   42537 command_runner.go:130] > # 	"/dev/fuse",
	I0722 11:21:03.779577   42537 command_runner.go:130] > # ]
	I0722 11:21:03.779586   42537 command_runner.go:130] > # List of additional devices, specified as
	I0722 11:21:03.779595   42537 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0722 11:21:03.779602   42537 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0722 11:21:03.779608   42537 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0722 11:21:03.779614   42537 command_runner.go:130] > # additional_devices = [
	I0722 11:21:03.779618   42537 command_runner.go:130] > # ]
	I0722 11:21:03.779628   42537 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0722 11:21:03.779637   42537 command_runner.go:130] > # cdi_spec_dirs = [
	I0722 11:21:03.779642   42537 command_runner.go:130] > # 	"/etc/cdi",
	I0722 11:21:03.779649   42537 command_runner.go:130] > # 	"/var/run/cdi",
	I0722 11:21:03.779654   42537 command_runner.go:130] > # ]
	I0722 11:21:03.779663   42537 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0722 11:21:03.779675   42537 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0722 11:21:03.779684   42537 command_runner.go:130] > # Defaults to false.
	I0722 11:21:03.779692   42537 command_runner.go:130] > # device_ownership_from_security_context = false
	I0722 11:21:03.779704   42537 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0722 11:21:03.779716   42537 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0722 11:21:03.779726   42537 command_runner.go:130] > # hooks_dir = [
	I0722 11:21:03.779736   42537 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0722 11:21:03.779741   42537 command_runner.go:130] > # ]
	I0722 11:21:03.779749   42537 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0722 11:21:03.779755   42537 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0722 11:21:03.779763   42537 command_runner.go:130] > # its default mounts from the following two files:
	I0722 11:21:03.779766   42537 command_runner.go:130] > #
	I0722 11:21:03.779776   42537 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0722 11:21:03.779785   42537 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0722 11:21:03.779792   42537 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0722 11:21:03.779797   42537 command_runner.go:130] > #
	I0722 11:21:03.779803   42537 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0722 11:21:03.779811   42537 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0722 11:21:03.779819   42537 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0722 11:21:03.779823   42537 command_runner.go:130] > #      only add mounts it finds in this file.
	I0722 11:21:03.779828   42537 command_runner.go:130] > #
	I0722 11:21:03.779832   42537 command_runner.go:130] > # default_mounts_file = ""
	I0722 11:21:03.779839   42537 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0722 11:21:03.779845   42537 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0722 11:21:03.779851   42537 command_runner.go:130] > pids_limit = 1024
	I0722 11:21:03.779858   42537 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0722 11:21:03.779868   42537 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0722 11:21:03.779877   42537 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0722 11:21:03.779886   42537 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0722 11:21:03.779893   42537 command_runner.go:130] > # log_size_max = -1
	I0722 11:21:03.779899   42537 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0722 11:21:03.779906   42537 command_runner.go:130] > # log_to_journald = false
	I0722 11:21:03.779912   42537 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0722 11:21:03.779918   42537 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0722 11:21:03.779923   42537 command_runner.go:130] > # Path to directory for container attach sockets.
	I0722 11:21:03.779930   42537 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0722 11:21:03.779935   42537 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0722 11:21:03.779941   42537 command_runner.go:130] > # bind_mount_prefix = ""
	I0722 11:21:03.779946   42537 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0722 11:21:03.779952   42537 command_runner.go:130] > # read_only = false
	I0722 11:21:03.779958   42537 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0722 11:21:03.779966   42537 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0722 11:21:03.779971   42537 command_runner.go:130] > # live configuration reload.
	I0722 11:21:03.779976   42537 command_runner.go:130] > # log_level = "info"
	I0722 11:21:03.779980   42537 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0722 11:21:03.779987   42537 command_runner.go:130] > # This option supports live configuration reload.
	I0722 11:21:03.779991   42537 command_runner.go:130] > # log_filter = ""
	I0722 11:21:03.779998   42537 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0722 11:21:03.780008   42537 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0722 11:21:03.780014   42537 command_runner.go:130] > # separated by comma.
	I0722 11:21:03.780021   42537 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0722 11:21:03.780026   42537 command_runner.go:130] > # uid_mappings = ""
	I0722 11:21:03.780032   42537 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0722 11:21:03.780039   42537 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0722 11:21:03.780044   42537 command_runner.go:130] > # separated by comma.
	I0722 11:21:03.780051   42537 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0722 11:21:03.780057   42537 command_runner.go:130] > # gid_mappings = ""
	I0722 11:21:03.780063   42537 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0722 11:21:03.780070   42537 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0722 11:21:03.780076   42537 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0722 11:21:03.780085   42537 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0722 11:21:03.780091   42537 command_runner.go:130] > # minimum_mappable_uid = -1
	I0722 11:21:03.780097   42537 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0722 11:21:03.780105   42537 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0722 11:21:03.780116   42537 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0722 11:21:03.780124   42537 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0722 11:21:03.780130   42537 command_runner.go:130] > # minimum_mappable_gid = -1
	I0722 11:21:03.780136   42537 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0722 11:21:03.780144   42537 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0722 11:21:03.780151   42537 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0722 11:21:03.780155   42537 command_runner.go:130] > # ctr_stop_timeout = 30
	I0722 11:21:03.780161   42537 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0722 11:21:03.780168   42537 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0722 11:21:03.780173   42537 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0722 11:21:03.780180   42537 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0722 11:21:03.780183   42537 command_runner.go:130] > drop_infra_ctr = false
	I0722 11:21:03.780191   42537 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0722 11:21:03.780198   42537 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0722 11:21:03.780205   42537 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0722 11:21:03.780211   42537 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0722 11:21:03.780218   42537 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0722 11:21:03.780225   42537 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0722 11:21:03.780233   42537 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0722 11:21:03.780237   42537 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0722 11:21:03.780243   42537 command_runner.go:130] > # shared_cpuset = ""
	I0722 11:21:03.780250   42537 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0722 11:21:03.780256   42537 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0722 11:21:03.780261   42537 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0722 11:21:03.780270   42537 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0722 11:21:03.780276   42537 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0722 11:21:03.780281   42537 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0722 11:21:03.780289   42537 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0722 11:21:03.780295   42537 command_runner.go:130] > # enable_criu_support = false
	I0722 11:21:03.780299   42537 command_runner.go:130] > # Enable/disable the generation of the container,
	I0722 11:21:03.780307   42537 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0722 11:21:03.780311   42537 command_runner.go:130] > # enable_pod_events = false
	I0722 11:21:03.780317   42537 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0722 11:21:03.780332   42537 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0722 11:21:03.780336   42537 command_runner.go:130] > # default_runtime = "runc"
	I0722 11:21:03.780342   42537 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0722 11:21:03.780349   42537 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I0722 11:21:03.780359   42537 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0722 11:21:03.780366   42537 command_runner.go:130] > # creation as a file is not desired either.
	I0722 11:21:03.780374   42537 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0722 11:21:03.780392   42537 command_runner.go:130] > # the hostname is being managed dynamically.
	I0722 11:21:03.780400   42537 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0722 11:21:03.780408   42537 command_runner.go:130] > # ]
	I0722 11:21:03.780413   42537 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0722 11:21:03.780421   42537 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0722 11:21:03.780428   42537 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0722 11:21:03.780435   42537 command_runner.go:130] > # Each entry in the table should follow the format:
	I0722 11:21:03.780438   42537 command_runner.go:130] > #
	I0722 11:21:03.780445   42537 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0722 11:21:03.780450   42537 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0722 11:21:03.780475   42537 command_runner.go:130] > # runtime_type = "oci"
	I0722 11:21:03.780482   42537 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0722 11:21:03.780487   42537 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0722 11:21:03.780493   42537 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0722 11:21:03.780498   42537 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0722 11:21:03.780504   42537 command_runner.go:130] > # monitor_env = []
	I0722 11:21:03.780509   42537 command_runner.go:130] > # privileged_without_host_devices = false
	I0722 11:21:03.780515   42537 command_runner.go:130] > # allowed_annotations = []
	I0722 11:21:03.780520   42537 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0722 11:21:03.780525   42537 command_runner.go:130] > # Where:
	I0722 11:21:03.780531   42537 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0722 11:21:03.780538   42537 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0722 11:21:03.780547   42537 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0722 11:21:03.780553   42537 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0722 11:21:03.780559   42537 command_runner.go:130] > #   in $PATH.
	I0722 11:21:03.780565   42537 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0722 11:21:03.780571   42537 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0722 11:21:03.780577   42537 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0722 11:21:03.780582   42537 command_runner.go:130] > #   state.
	I0722 11:21:03.780588   42537 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0722 11:21:03.780596   42537 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0722 11:21:03.780604   42537 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0722 11:21:03.780611   42537 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0722 11:21:03.780618   42537 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0722 11:21:03.780631   42537 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0722 11:21:03.780641   42537 command_runner.go:130] > #   The currently recognized values are:
	I0722 11:21:03.780651   42537 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0722 11:21:03.780664   42537 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0722 11:21:03.780675   42537 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0722 11:21:03.780685   42537 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0722 11:21:03.780699   42537 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0722 11:21:03.780709   42537 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0722 11:21:03.780717   42537 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0722 11:21:03.780725   42537 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0722 11:21:03.780732   42537 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0722 11:21:03.780740   42537 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0722 11:21:03.780745   42537 command_runner.go:130] > #   deprecated option "conmon".
	I0722 11:21:03.780754   42537 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0722 11:21:03.780760   42537 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0722 11:21:03.780766   42537 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0722 11:21:03.780774   42537 command_runner.go:130] > #   should be moved to the container's cgroup
	I0722 11:21:03.780781   42537 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0722 11:21:03.780787   42537 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0722 11:21:03.780793   42537 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0722 11:21:03.780800   42537 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0722 11:21:03.780803   42537 command_runner.go:130] > #
	I0722 11:21:03.780810   42537 command_runner.go:130] > # Using the seccomp notifier feature:
	I0722 11:21:03.780813   42537 command_runner.go:130] > #
	I0722 11:21:03.780819   42537 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0722 11:21:03.780827   42537 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0722 11:21:03.780832   42537 command_runner.go:130] > #
	I0722 11:21:03.780839   42537 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0722 11:21:03.780848   42537 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0722 11:21:03.780853   42537 command_runner.go:130] > #
	I0722 11:21:03.780860   42537 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0722 11:21:03.780866   42537 command_runner.go:130] > # feature.
	I0722 11:21:03.780869   42537 command_runner.go:130] > #
	I0722 11:21:03.780875   42537 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0722 11:21:03.780881   42537 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0722 11:21:03.780889   42537 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0722 11:21:03.780897   42537 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0722 11:21:03.780903   42537 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0722 11:21:03.780908   42537 command_runner.go:130] > #
	I0722 11:21:03.780913   42537 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0722 11:21:03.780921   42537 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0722 11:21:03.780926   42537 command_runner.go:130] > #
	I0722 11:21:03.780932   42537 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0722 11:21:03.780939   42537 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0722 11:21:03.780945   42537 command_runner.go:130] > #
	I0722 11:21:03.780950   42537 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0722 11:21:03.780958   42537 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0722 11:21:03.780963   42537 command_runner.go:130] > # limitation.
	I0722 11:21:03.780968   42537 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0722 11:21:03.780975   42537 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0722 11:21:03.780979   42537 command_runner.go:130] > runtime_type = "oci"
	I0722 11:21:03.780985   42537 command_runner.go:130] > runtime_root = "/run/runc"
	I0722 11:21:03.780989   42537 command_runner.go:130] > runtime_config_path = ""
	I0722 11:21:03.780996   42537 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0722 11:21:03.781000   42537 command_runner.go:130] > monitor_cgroup = "pod"
	I0722 11:21:03.781006   42537 command_runner.go:130] > monitor_exec_cgroup = ""
	I0722 11:21:03.781010   42537 command_runner.go:130] > monitor_env = [
	I0722 11:21:03.781017   42537 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0722 11:21:03.781022   42537 command_runner.go:130] > ]
	I0722 11:21:03.781027   42537 command_runner.go:130] > privileged_without_host_devices = false
	I0722 11:21:03.781036   42537 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0722 11:21:03.781044   42537 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0722 11:21:03.781050   42537 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0722 11:21:03.781059   42537 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0722 11:21:03.781066   42537 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0722 11:21:03.781073   42537 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0722 11:21:03.781081   42537 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0722 11:21:03.781090   42537 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0722 11:21:03.781095   42537 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0722 11:21:03.781102   42537 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0722 11:21:03.781105   42537 command_runner.go:130] > # Example:
	I0722 11:21:03.781112   42537 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0722 11:21:03.781116   42537 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0722 11:21:03.781121   42537 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0722 11:21:03.781125   42537 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0722 11:21:03.781129   42537 command_runner.go:130] > # cpuset = 0
	I0722 11:21:03.781132   42537 command_runner.go:130] > # cpushares = "0-1"
	I0722 11:21:03.781135   42537 command_runner.go:130] > # Where:
	I0722 11:21:03.781139   42537 command_runner.go:130] > # The workload name is workload-type.
	I0722 11:21:03.781145   42537 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0722 11:21:03.781150   42537 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0722 11:21:03.781155   42537 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0722 11:21:03.781163   42537 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0722 11:21:03.781168   42537 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0722 11:21:03.781172   42537 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0722 11:21:03.781178   42537 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0722 11:21:03.781182   42537 command_runner.go:130] > # Default value is set to true
	I0722 11:21:03.781186   42537 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0722 11:21:03.781191   42537 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0722 11:21:03.781196   42537 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0722 11:21:03.781200   42537 command_runner.go:130] > # Default value is set to 'false'
	I0722 11:21:03.781204   42537 command_runner.go:130] > # disable_hostport_mapping = false
	I0722 11:21:03.781210   42537 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0722 11:21:03.781213   42537 command_runner.go:130] > #
	I0722 11:21:03.781218   42537 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0722 11:21:03.781223   42537 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0722 11:21:03.781229   42537 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0722 11:21:03.781234   42537 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0722 11:21:03.781239   42537 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0722 11:21:03.781243   42537 command_runner.go:130] > [crio.image]
	I0722 11:21:03.781248   42537 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0722 11:21:03.781254   42537 command_runner.go:130] > # default_transport = "docker://"
	I0722 11:21:03.781260   42537 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0722 11:21:03.781268   42537 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0722 11:21:03.781272   42537 command_runner.go:130] > # global_auth_file = ""
	I0722 11:21:03.781277   42537 command_runner.go:130] > # The image used to instantiate infra containers.
	I0722 11:21:03.781283   42537 command_runner.go:130] > # This option supports live configuration reload.
	I0722 11:21:03.781288   42537 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0722 11:21:03.781297   42537 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0722 11:21:03.781302   42537 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0722 11:21:03.781309   42537 command_runner.go:130] > # This option supports live configuration reload.
	I0722 11:21:03.781313   42537 command_runner.go:130] > # pause_image_auth_file = ""
	I0722 11:21:03.781320   42537 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0722 11:21:03.781329   42537 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0722 11:21:03.781337   42537 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0722 11:21:03.781344   42537 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0722 11:21:03.781350   42537 command_runner.go:130] > # pause_command = "/pause"
	I0722 11:21:03.781356   42537 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0722 11:21:03.781364   42537 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0722 11:21:03.781370   42537 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0722 11:21:03.781378   42537 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0722 11:21:03.781386   42537 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0722 11:21:03.781394   42537 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0722 11:21:03.781400   42537 command_runner.go:130] > # pinned_images = [
	I0722 11:21:03.781403   42537 command_runner.go:130] > # ]
	I0722 11:21:03.781412   42537 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0722 11:21:03.781420   42537 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0722 11:21:03.781426   42537 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0722 11:21:03.781434   42537 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0722 11:21:03.781441   42537 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0722 11:21:03.781445   42537 command_runner.go:130] > # signature_policy = ""
	I0722 11:21:03.781450   42537 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0722 11:21:03.781459   42537 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0722 11:21:03.781465   42537 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0722 11:21:03.781473   42537 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0722 11:21:03.781480   42537 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0722 11:21:03.781485   42537 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0722 11:21:03.781492   42537 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0722 11:21:03.781500   42537 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0722 11:21:03.781505   42537 command_runner.go:130] > # changing them here.
	I0722 11:21:03.781509   42537 command_runner.go:130] > # insecure_registries = [
	I0722 11:21:03.781514   42537 command_runner.go:130] > # ]
	I0722 11:21:03.781521   42537 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0722 11:21:03.781527   42537 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0722 11:21:03.781536   42537 command_runner.go:130] > # image_volumes = "mkdir"
	I0722 11:21:03.781544   42537 command_runner.go:130] > # Temporary directory to use for storing big files
	I0722 11:21:03.781548   42537 command_runner.go:130] > # big_files_temporary_dir = ""
	I0722 11:21:03.781556   42537 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0722 11:21:03.781562   42537 command_runner.go:130] > # CNI plugins.
	I0722 11:21:03.781566   42537 command_runner.go:130] > [crio.network]
	I0722 11:21:03.781573   42537 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0722 11:21:03.781580   42537 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0722 11:21:03.781584   42537 command_runner.go:130] > # cni_default_network = ""
	I0722 11:21:03.781592   42537 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0722 11:21:03.781598   42537 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0722 11:21:03.781604   42537 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0722 11:21:03.781609   42537 command_runner.go:130] > # plugin_dirs = [
	I0722 11:21:03.781612   42537 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0722 11:21:03.781617   42537 command_runner.go:130] > # ]
	I0722 11:21:03.781625   42537 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0722 11:21:03.781634   42537 command_runner.go:130] > [crio.metrics]
	I0722 11:21:03.781642   42537 command_runner.go:130] > # Globally enable or disable metrics support.
	I0722 11:21:03.781652   42537 command_runner.go:130] > enable_metrics = true
	I0722 11:21:03.781658   42537 command_runner.go:130] > # Specify enabled metrics collectors.
	I0722 11:21:03.781668   42537 command_runner.go:130] > # Per default all metrics are enabled.
	I0722 11:21:03.781682   42537 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0722 11:21:03.781694   42537 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0722 11:21:03.781705   42537 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0722 11:21:03.781715   42537 command_runner.go:130] > # metrics_collectors = [
	I0722 11:21:03.781720   42537 command_runner.go:130] > # 	"operations",
	I0722 11:21:03.781729   42537 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0722 11:21:03.781739   42537 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0722 11:21:03.781749   42537 command_runner.go:130] > # 	"operations_errors",
	I0722 11:21:03.781758   42537 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0722 11:21:03.781766   42537 command_runner.go:130] > # 	"image_pulls_by_name",
	I0722 11:21:03.781770   42537 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0722 11:21:03.781776   42537 command_runner.go:130] > # 	"image_pulls_failures",
	I0722 11:21:03.781780   42537 command_runner.go:130] > # 	"image_pulls_successes",
	I0722 11:21:03.781786   42537 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0722 11:21:03.781790   42537 command_runner.go:130] > # 	"image_layer_reuse",
	I0722 11:21:03.781797   42537 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0722 11:21:03.781800   42537 command_runner.go:130] > # 	"containers_oom_total",
	I0722 11:21:03.781804   42537 command_runner.go:130] > # 	"containers_oom",
	I0722 11:21:03.781810   42537 command_runner.go:130] > # 	"processes_defunct",
	I0722 11:21:03.781814   42537 command_runner.go:130] > # 	"operations_total",
	I0722 11:21:03.781821   42537 command_runner.go:130] > # 	"operations_latency_seconds",
	I0722 11:21:03.781825   42537 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0722 11:21:03.781832   42537 command_runner.go:130] > # 	"operations_errors_total",
	I0722 11:21:03.781836   42537 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0722 11:21:03.781842   42537 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0722 11:21:03.781846   42537 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0722 11:21:03.781852   42537 command_runner.go:130] > # 	"image_pulls_success_total",
	I0722 11:21:03.781857   42537 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0722 11:21:03.781864   42537 command_runner.go:130] > # 	"containers_oom_count_total",
	I0722 11:21:03.781869   42537 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0722 11:21:03.781875   42537 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0722 11:21:03.781879   42537 command_runner.go:130] > # ]
	I0722 11:21:03.781885   42537 command_runner.go:130] > # The port on which the metrics server will listen.
	I0722 11:21:03.781891   42537 command_runner.go:130] > # metrics_port = 9090
	I0722 11:21:03.781896   42537 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0722 11:21:03.781903   42537 command_runner.go:130] > # metrics_socket = ""
	I0722 11:21:03.781908   42537 command_runner.go:130] > # The certificate for the secure metrics server.
	I0722 11:21:03.781917   42537 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0722 11:21:03.781925   42537 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0722 11:21:03.781932   42537 command_runner.go:130] > # certificate on any modification event.
	I0722 11:21:03.781936   42537 command_runner.go:130] > # metrics_cert = ""
	I0722 11:21:03.781941   42537 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0722 11:21:03.781948   42537 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0722 11:21:03.781952   42537 command_runner.go:130] > # metrics_key = ""
	I0722 11:21:03.781959   42537 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0722 11:21:03.781962   42537 command_runner.go:130] > [crio.tracing]
	I0722 11:21:03.781969   42537 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0722 11:21:03.781973   42537 command_runner.go:130] > # enable_tracing = false
	I0722 11:21:03.781980   42537 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0722 11:21:03.781985   42537 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0722 11:21:03.781993   42537 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0722 11:21:03.781999   42537 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0722 11:21:03.782004   42537 command_runner.go:130] > # CRI-O NRI configuration.
	I0722 11:21:03.782010   42537 command_runner.go:130] > [crio.nri]
	I0722 11:21:03.782014   42537 command_runner.go:130] > # Globally enable or disable NRI.
	I0722 11:21:03.782020   42537 command_runner.go:130] > # enable_nri = false
	I0722 11:21:03.782024   42537 command_runner.go:130] > # NRI socket to listen on.
	I0722 11:21:03.782030   42537 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0722 11:21:03.782035   42537 command_runner.go:130] > # NRI plugin directory to use.
	I0722 11:21:03.782041   42537 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0722 11:21:03.782045   42537 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0722 11:21:03.782050   42537 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0722 11:21:03.782057   42537 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0722 11:21:03.782063   42537 command_runner.go:130] > # nri_disable_connections = false
	I0722 11:21:03.782068   42537 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0722 11:21:03.782074   42537 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0722 11:21:03.782079   42537 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0722 11:21:03.782085   42537 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0722 11:21:03.782092   42537 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0722 11:21:03.782097   42537 command_runner.go:130] > [crio.stats]
	I0722 11:21:03.782103   42537 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0722 11:21:03.782113   42537 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0722 11:21:03.782119   42537 command_runner.go:130] > # stats_collection_period = 0
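	For readability, the values the dump above actually sets (the uncommented lines) reduce to a short override. The sketch below collects them into a single drop-in; the file path /etc/crio/crio.conf.d/02-crio.conf and the [crio.api] table name for the gRPC limits are illustrative assumptions, not something this log confirms, and every other option in the dump stays at its commented-out default.

	# Illustrative consolidation of the explicitly set values from the dump above.
	# Assumed drop-in location: /etc/crio/crio.conf.d/02-crio.conf
	[crio.api]
	grpc_max_send_msg_size = 16777216
	grpc_max_recv_msg_size = 16777216

	[crio.runtime]
	conmon = "/usr/libexec/crio/conmon"
	conmon_cgroup = "pod"
	conmon_env = ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"]
	seccomp_use_default_when_empty = false
	cgroup_manager = "cgroupfs"
	default_sysctls = ["net.ipv4.ip_unprivileged_port_start=0"]
	pids_limit = 1024
	drop_infra_ctr = false
	pinns_path = "/usr/bin/pinns"

	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"]
	privileged_without_host_devices = false

	[crio.metrics]
	enable_metrics = true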
	I0722 11:21:03.782216   42537 cni.go:84] Creating CNI manager for ""
	I0722 11:21:03.782225   42537 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0722 11:21:03.782235   42537 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:21:03.782252   42537 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-025157 NodeName:multinode-025157 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:21:03.782373   42537 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-025157"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:21:03.782425   42537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 11:21:03.792343   42537 command_runner.go:130] > kubeadm
	I0722 11:21:03.792361   42537 command_runner.go:130] > kubectl
	I0722 11:21:03.792367   42537 command_runner.go:130] > kubelet
	I0722 11:21:03.792397   42537 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:21:03.792448   42537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:21:03.801616   42537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0722 11:21:03.817663   42537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:21:03.833426   42537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0722 11:21:03.849475   42537 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I0722 11:21:03.853170   42537 command_runner.go:130] > 192.168.39.158	control-plane.minikube.internal
	I0722 11:21:03.853289   42537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:21:03.987370   42537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:21:04.001972   42537 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157 for IP: 192.168.39.158
	I0722 11:21:04.001989   42537 certs.go:194] generating shared ca certs ...
	I0722 11:21:04.002003   42537 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:21:04.002173   42537 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:21:04.002219   42537 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:21:04.002229   42537 certs.go:256] generating profile certs ...
	I0722 11:21:04.002297   42537 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/client.key
	I0722 11:21:04.002352   42537 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/apiserver.key.268a156f
	I0722 11:21:04.002387   42537 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/proxy-client.key
	I0722 11:21:04.002397   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 11:21:04.002410   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 11:21:04.002420   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 11:21:04.002434   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 11:21:04.002451   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 11:21:04.002464   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 11:21:04.002476   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 11:21:04.002487   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 11:21:04.002535   42537 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:21:04.002563   42537 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:21:04.002573   42537 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:21:04.002592   42537 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:21:04.002617   42537 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:21:04.002636   42537 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:21:04.002674   42537 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:21:04.002697   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /usr/share/ca-certificates/130982.pem
	I0722 11:21:04.002710   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:21:04.002721   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem -> /usr/share/ca-certificates/13098.pem
	I0722 11:21:04.003238   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:21:04.027286   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:21:04.050580   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:21:04.074001   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:21:04.097524   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 11:21:04.121581   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 11:21:04.145303   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:21:04.168560   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:21:04.192945   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:21:04.217786   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:21:04.242586   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:21:04.267099   42537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:21:04.285052   42537 ssh_runner.go:195] Run: openssl version
	I0722 11:21:04.290892   42537 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0722 11:21:04.290950   42537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:21:04.301513   42537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:21:04.305743   42537 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:21:04.305870   42537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:21:04.305911   42537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:21:04.311308   42537 command_runner.go:130] > 3ec20f2e
	I0722 11:21:04.311509   42537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:21:04.320716   42537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:21:04.330987   42537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:21:04.335478   42537 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:21:04.335497   42537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:21:04.335535   42537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:21:04.341576   42537 command_runner.go:130] > b5213941
	I0722 11:21:04.341711   42537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:21:04.351031   42537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:21:04.361549   42537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:21:04.365746   42537 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:21:04.365908   42537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:21:04.365936   42537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:21:04.371624   42537 command_runner.go:130] > 51391683
	I0722 11:21:04.371686   42537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
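The three certificate installs above follow one pattern: place the PEM under /usr/share/ca-certificates, link it into /etc/ssl/certs, then create the OpenSSL subject-hash symlink (<hash>.0) that TLS tooling on the node resolves. A manual equivalent for one of the files, mirroring the logged commands:

    # Same steps as the log, shown for minikubeCA.pem (hash b5213941 per the output above).
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"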
	I0722 11:21:04.380575   42537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:21:04.385341   42537 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:21:04.385360   42537 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0722 11:21:04.385368   42537 command_runner.go:130] > Device: 253,1	Inode: 3150891     Links: 1
	I0722 11:21:04.385377   42537 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0722 11:21:04.385386   42537 command_runner.go:130] > Access: 2024-07-22 11:14:24.190992226 +0000
	I0722 11:21:04.385408   42537 command_runner.go:130] > Modify: 2024-07-22 11:14:24.190992226 +0000
	I0722 11:21:04.385420   42537 command_runner.go:130] > Change: 2024-07-22 11:14:24.190992226 +0000
	I0722 11:21:04.385429   42537 command_runner.go:130] >  Birth: 2024-07-22 11:14:24.190992226 +0000
	I0722 11:21:04.385483   42537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:21:04.390960   42537 command_runner.go:130] > Certificate will not expire
	I0722 11:21:04.391013   42537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:21:04.396552   42537 command_runner.go:130] > Certificate will not expire
	I0722 11:21:04.396614   42537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:21:04.402020   42537 command_runner.go:130] > Certificate will not expire
	I0722 11:21:04.402230   42537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:21:04.407480   42537 command_runner.go:130] > Certificate will not expire
	I0722 11:21:04.407716   42537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:21:04.413035   42537 command_runner.go:130] > Certificate will not expire
	I0722 11:21:04.413234   42537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 11:21:04.418442   42537 command_runner.go:130] > Certificate will not expire
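Each expiry probe above runs `openssl x509 -checkend 86400`, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" is OpenSSL's success message for that check. A short sketch of the same check over the client certificates named in the log:

    # Flags any control-plane client certificate that expires within 24 hours.
    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
        && echo "${crt}: ok" || echo "${crt}: expires within 24h"
    done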
	I0722 11:21:04.418627   42537 kubeadm.go:392] StartCluster: {Name:multinode-025157 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-025157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.50 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
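StartCluster is restarting a three-node profile: the control plane at 192.168.39.158 plus workers m02 (192.168.39.155) and m03 (192.168.39.50), all on Kubernetes v1.30.3 with the crio runtime. Assuming the minikube binary and profile are still present on the Jenkins host, the same topology can be read back with the profile-scoped commands:

    # Run on the host that owns the profile (assumption: default minikube layout).
    minikube -p multinode-025157 node list   # node names and IPs
    minikube -p multinode-025157 status      # per-node host/kubelet/apiserver state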
	I0722 11:21:04.418763   42537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:21:04.418813   42537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:21:04.454578   42537 command_runner.go:130] > c6cee19e34e4b971b582c31e860c234e7332fc662b0a03889f0abe267b33dc83
	I0722 11:21:04.454607   42537 command_runner.go:130] > c8ee5f6d8a84cf39ed93d9f97f2aea6af59792efc7ca3d22ab5977f29913d035
	I0722 11:21:04.454616   42537 command_runner.go:130] > 1fe3af5c01ec940f21176249de664c57dc58b34ffb6fceac4d35c55646e50b93
	I0722 11:21:04.454625   42537 command_runner.go:130] > 1c87ae44611335d3e254a269ca9843e61acca277b61c3cfd5a7ee389eab0a0fe
	I0722 11:21:04.454634   42537 command_runner.go:130] > 702ffe223ffbdbd26bcfbedcee8d508cd00c9fcdee81c7aa12ffab5e3cde854c
	I0722 11:21:04.454642   42537 command_runner.go:130] > 41200509492aeb47fecfd89d606a251df7b021c3320f35b984019521bdc3b59f
	I0722 11:21:04.454648   42537 command_runner.go:130] > 9fcf31453e06da42b109eb535b6676cbf599610361eb902a8727895311e88d1e
	I0722 11:21:04.454655   42537 command_runner.go:130] > 3a756aa97fb8a81312a36b2d014bf2619ac5d5b2c1ce5367fbec568e435b04f4
	I0722 11:21:04.454678   42537 cri.go:89] found id: "c6cee19e34e4b971b582c31e860c234e7332fc662b0a03889f0abe267b33dc83"
	I0722 11:21:04.454689   42537 cri.go:89] found id: "c8ee5f6d8a84cf39ed93d9f97f2aea6af59792efc7ca3d22ab5977f29913d035"
	I0722 11:21:04.454697   42537 cri.go:89] found id: "1fe3af5c01ec940f21176249de664c57dc58b34ffb6fceac4d35c55646e50b93"
	I0722 11:21:04.454705   42537 cri.go:89] found id: "1c87ae44611335d3e254a269ca9843e61acca277b61c3cfd5a7ee389eab0a0fe"
	I0722 11:21:04.454712   42537 cri.go:89] found id: "702ffe223ffbdbd26bcfbedcee8d508cd00c9fcdee81c7aa12ffab5e3cde854c"
	I0722 11:21:04.454716   42537 cri.go:89] found id: "41200509492aeb47fecfd89d606a251df7b021c3320f35b984019521bdc3b59f"
	I0722 11:21:04.454723   42537 cri.go:89] found id: "9fcf31453e06da42b109eb535b6676cbf599610361eb902a8727895311e88d1e"
	I0722 11:21:04.454727   42537 cri.go:89] found id: "3a756aa97fb8a81312a36b2d014bf2619ac5d5b2c1ce5367fbec568e435b04f4"
	I0722 11:21:04.454732   42537 cri.go:89] found id: ""
	I0722 11:21:04.454779   42537 ssh_runner.go:195] Run: sudo runc list -f json
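The CRI scan above collects the eight existing kube-system container IDs with a label-filtered `crictl ps` (the command logged at 11:21:04.418813) and then cross-checks the runc state. An equivalent manual listing on the node, without --quiet so crictl prints its normal table with container names:

    # Same label filter as the logged command, readable output instead of bare IDs.
    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system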
	
	
	==> CRI-O <==
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.577231863Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721647368577208805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=923adf0d-63dc-47eb-b919-ec79cc2c93f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.577613431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=569c53d9-2b0d-47b1-941f-d812a962e1b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.577722732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=569c53d9-2b0d-47b1-941f-d812a962e1b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.580059763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c2a3493f61411824614c441e1d5f3533836e0c8afa72d53c7dd61281abbf00,PodSandboxId:71cbd29654a04cfa33acae2104625c2a7d7af11e2599abd56206c36754a3cbd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721647304228060094,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-65kqg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 103ec644-0628-4056-a814-044f38ece31f,},Annotations:map[string]string{io.kubernetes.container.hash: 318eb941,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c5daaeebe619d6f32b9f54bb15d461d2c263a24fc0e8e5162dc012448c052b,PodSandboxId:b00a720f793e1fc2cc6387f423af3e752daeb8645e98dcb7f4cdff3f14001902,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721647270731639193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksk8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67,},Annotations:map[string]string{io.kubernetes.container.hash: c62c8be1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3322a06af35cfbdbb916f0b20ac7b184a84cefba47094bfa5facfab0ec06735,PodSandboxId:4ce09d606f177ec101acc161c9fba1be8ea11505955da8064a551198e462c3c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721647270582575723,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-knmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5934987b-a9ec-4a7d-a446-b8a8c686ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 2cfecdef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f89de775577c02a3a6e9f6b3b8bda5f46ac61acae0d58173ad32707b6d8b90,PodSandboxId:adea1c11ada9b19c93f9c330f32bcd8ce0d505ab82405c44e26f9cdfab3e8a24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721647270529261108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 629c8fdb-9801-4ad0-857f-22817bc60e17,},An
notations:map[string]string{io.kubernetes.container.hash: bb0ae72c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20b67e53b9c38207a87f3b84f23b9922150eb375e18e1209254475066746763,PodSandboxId:fd958ec8bd7960b18ca1d3908fa781444ce3fe61c5c1364c1424048851ee1dca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721647270487268395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xv25n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f84e764d-47ca-4634-be5b-aec35a978516,},Annotations:map[string]string{io.ku
bernetes.container.hash: bd4d1ad3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f120afa0b416866b5719621806acc8acb202ebba2b96a4f775ec38c8f35b3bed,PodSandboxId:092da20f3dc36d253ad1c2982763f0ece671b42ca41f94a45ded61a97c5174e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721647266712601703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffde10143c677eeb363eba418ccd6135,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0b58aa965a534dae9905e00697ae0043257bc3cba127faf4cc2c9785c20ced7,PodSandboxId:beaf0d5fae8ca40535372ca52fed6f51ea7775a243ddc8229cfd377fe687d5aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721647266675536663,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baffe058d2eab14bba7bb69be802823,},Annotations:map[string]string{io.kubernetes.container.hash: 1c7963d5,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f533b9177b28fa7396cc8802a8a2c574a481880d21c9fdaddedf8efe1bda20f,PodSandboxId:149431d9bab728eef6ef26857918eb67aaa7d5d04d02348ca9cdb9ba948ad9c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721647266642994402,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaad1d5975ac0330d0eca26b6a335dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 908449f3,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1025ae107db528537896672e610b73744c367193dcdad9c8c334f88701990658,PodSandboxId:c4746e933e22198dd680a18b31b463bce38bf26805bcf1d6fe29d35e8ee39dca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721647266653238473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2934112736843e2be33b5a75c928eeba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a0458e53e93890aa4a446e6e1dccdec0c646a741abf8893201983961a9db2f,PodSandboxId:41e04960852e730f8728e4c37f9c1e1fc8ff99b855b775238a6c58738f757ba9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721646951761206290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-65kqg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 103ec644-0628-4056-a814-044f38ece31f,},Annotations:map[string]string{io.kubernetes.container.hash: 318eb941,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6cee19e34e4b971b582c31e860c234e7332fc662b0a03889f0abe267b33dc83,PodSandboxId:23dc5d18c3dc0807477d9547a0daa6f739e871e4c265802929313adae3e0de78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721646902871135551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-knmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5934987b-a9ec-4a7d-a446-b8a8c686ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 2cfecdef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ee5f6d8a84cf39ed93d9f97f2aea6af59792efc7ca3d22ab5977f29913d035,PodSandboxId:c0012afb7ace6dfc0e125e1bc98d3a407f3dec5da1c50d5370b3f6b14656f03b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721646902813854844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 629c8fdb-9801-4ad0-857f-22817bc60e17,},Annotations:map[string]string{io.kubernetes.container.hash: bb0ae72c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe3af5c01ec940f21176249de664c57dc58b34ffb6fceac4d35c55646e50b93,PodSandboxId:4b46d6216eff4c9b44543091fe96c14139cd81c0770b433a22551fe344a361df,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721646890785439732,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksk8n,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67,},Annotations:map[string]string{io.kubernetes.container.hash: c62c8be1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c87ae44611335d3e254a269ca9843e61acca277b61c3cfd5a7ee389eab0a0fe,PodSandboxId:d6ec778d8382dd3a5da66c96229b849b0083e2583bc5368486debe25f57f7f1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721646889110203722,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xv25n,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f84e764d-47ca-4634-be5b-aec35a978516,},Annotations:map[string]string{io.kubernetes.container.hash: bd4d1ad3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702ffe223ffbdbd26bcfbedcee8d508cd00c9fcdee81c7aa12ffab5e3cde854c,PodSandboxId:93b897045f2a6ca82c86b5cc3bbd2fb8400d28c3c9834c8f83140b1a1b6b1ed4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721646868481747906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ffde10143c677eeb363eba418ccd6135,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41200509492aeb47fecfd89d606a251df7b021c3320f35b984019521bdc3b59f,PodSandboxId:f0b860b9d01a7c38c4623070a44f0570613545a7418d6fbf19fee4d2f5c88092,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721646868469126497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baffe058d2eab14bba7bb69be802823,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1c7963d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fcf31453e06da42b109eb535b6676cbf599610361eb902a8727895311e88d1e,PodSandboxId:e04d4710c2a668577953d984f03f2056b61183ad103dbea70d9eeb104c69d9c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721646868460717903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2934112736843e2be33b5a75c928eeba,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a756aa97fb8a81312a36b2d014bf2619ac5d5b2c1ce5367fbec568e435b04f4,PodSandboxId:fb4864e3abe4138a4230ab20a3945772786a45b82a26234a5f40c68616c368cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721646868427941671,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaad1d5975ac0330d0eca26b6a335dc9,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 908449f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=569c53d9-2b0d-47b1-941f-d812a962e1b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.634108456Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eff97e13-d393-4331-860c-3e44ae26ca40 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.634184768Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eff97e13-d393-4331-860c-3e44ae26ca40 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.635299716Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0402cbcc-3010-4460-9dd5-464596e7bde2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.635736497Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721647368635714547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0402cbcc-3010-4460-9dd5-464596e7bde2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.636318095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=935b5d8f-8548-41fa-90d5-0bf689183546 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.636370951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=935b5d8f-8548-41fa-90d5-0bf689183546 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.636707906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c2a3493f61411824614c441e1d5f3533836e0c8afa72d53c7dd61281abbf00,PodSandboxId:71cbd29654a04cfa33acae2104625c2a7d7af11e2599abd56206c36754a3cbd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721647304228060094,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-65kqg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 103ec644-0628-4056-a814-044f38ece31f,},Annotations:map[string]string{io.kubernetes.container.hash: 318eb941,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c5daaeebe619d6f32b9f54bb15d461d2c263a24fc0e8e5162dc012448c052b,PodSandboxId:b00a720f793e1fc2cc6387f423af3e752daeb8645e98dcb7f4cdff3f14001902,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721647270731639193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksk8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67,},Annotations:map[string]string{io.kubernetes.container.hash: c62c8be1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3322a06af35cfbdbb916f0b20ac7b184a84cefba47094bfa5facfab0ec06735,PodSandboxId:4ce09d606f177ec101acc161c9fba1be8ea11505955da8064a551198e462c3c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721647270582575723,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-knmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5934987b-a9ec-4a7d-a446-b8a8c686ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 2cfecdef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f89de775577c02a3a6e9f6b3b8bda5f46ac61acae0d58173ad32707b6d8b90,PodSandboxId:adea1c11ada9b19c93f9c330f32bcd8ce0d505ab82405c44e26f9cdfab3e8a24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721647270529261108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 629c8fdb-9801-4ad0-857f-22817bc60e17,},An
notations:map[string]string{io.kubernetes.container.hash: bb0ae72c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20b67e53b9c38207a87f3b84f23b9922150eb375e18e1209254475066746763,PodSandboxId:fd958ec8bd7960b18ca1d3908fa781444ce3fe61c5c1364c1424048851ee1dca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721647270487268395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xv25n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f84e764d-47ca-4634-be5b-aec35a978516,},Annotations:map[string]string{io.ku
bernetes.container.hash: bd4d1ad3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f120afa0b416866b5719621806acc8acb202ebba2b96a4f775ec38c8f35b3bed,PodSandboxId:092da20f3dc36d253ad1c2982763f0ece671b42ca41f94a45ded61a97c5174e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721647266712601703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffde10143c677eeb363eba418ccd6135,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0b58aa965a534dae9905e00697ae0043257bc3cba127faf4cc2c9785c20ced7,PodSandboxId:beaf0d5fae8ca40535372ca52fed6f51ea7775a243ddc8229cfd377fe687d5aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721647266675536663,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baffe058d2eab14bba7bb69be802823,},Annotations:map[string]string{io.kubernetes.container.hash: 1c7963d5,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f533b9177b28fa7396cc8802a8a2c574a481880d21c9fdaddedf8efe1bda20f,PodSandboxId:149431d9bab728eef6ef26857918eb67aaa7d5d04d02348ca9cdb9ba948ad9c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721647266642994402,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaad1d5975ac0330d0eca26b6a335dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 908449f3,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1025ae107db528537896672e610b73744c367193dcdad9c8c334f88701990658,PodSandboxId:c4746e933e22198dd680a18b31b463bce38bf26805bcf1d6fe29d35e8ee39dca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721647266653238473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2934112736843e2be33b5a75c928eeba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a0458e53e93890aa4a446e6e1dccdec0c646a741abf8893201983961a9db2f,PodSandboxId:41e04960852e730f8728e4c37f9c1e1fc8ff99b855b775238a6c58738f757ba9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721646951761206290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-65kqg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 103ec644-0628-4056-a814-044f38ece31f,},Annotations:map[string]string{io.kubernetes.container.hash: 318eb941,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6cee19e34e4b971b582c31e860c234e7332fc662b0a03889f0abe267b33dc83,PodSandboxId:23dc5d18c3dc0807477d9547a0daa6f739e871e4c265802929313adae3e0de78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721646902871135551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-knmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5934987b-a9ec-4a7d-a446-b8a8c686ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 2cfecdef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ee5f6d8a84cf39ed93d9f97f2aea6af59792efc7ca3d22ab5977f29913d035,PodSandboxId:c0012afb7ace6dfc0e125e1bc98d3a407f3dec5da1c50d5370b3f6b14656f03b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721646902813854844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 629c8fdb-9801-4ad0-857f-22817bc60e17,},Annotations:map[string]string{io.kubernetes.container.hash: bb0ae72c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe3af5c01ec940f21176249de664c57dc58b34ffb6fceac4d35c55646e50b93,PodSandboxId:4b46d6216eff4c9b44543091fe96c14139cd81c0770b433a22551fe344a361df,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721646890785439732,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksk8n,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67,},Annotations:map[string]string{io.kubernetes.container.hash: c62c8be1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c87ae44611335d3e254a269ca9843e61acca277b61c3cfd5a7ee389eab0a0fe,PodSandboxId:d6ec778d8382dd3a5da66c96229b849b0083e2583bc5368486debe25f57f7f1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721646889110203722,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xv25n,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f84e764d-47ca-4634-be5b-aec35a978516,},Annotations:map[string]string{io.kubernetes.container.hash: bd4d1ad3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702ffe223ffbdbd26bcfbedcee8d508cd00c9fcdee81c7aa12ffab5e3cde854c,PodSandboxId:93b897045f2a6ca82c86b5cc3bbd2fb8400d28c3c9834c8f83140b1a1b6b1ed4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721646868481747906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ffde10143c677eeb363eba418ccd6135,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41200509492aeb47fecfd89d606a251df7b021c3320f35b984019521bdc3b59f,PodSandboxId:f0b860b9d01a7c38c4623070a44f0570613545a7418d6fbf19fee4d2f5c88092,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721646868469126497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baffe058d2eab14bba7bb69be802823,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1c7963d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fcf31453e06da42b109eb535b6676cbf599610361eb902a8727895311e88d1e,PodSandboxId:e04d4710c2a668577953d984f03f2056b61183ad103dbea70d9eeb104c69d9c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721646868460717903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2934112736843e2be33b5a75c928eeba,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a756aa97fb8a81312a36b2d014bf2619ac5d5b2c1ce5367fbec568e435b04f4,PodSandboxId:fb4864e3abe4138a4230ab20a3945772786a45b82a26234a5f40c68616c368cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721646868427941671,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaad1d5975ac0330d0eca26b6a335dc9,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 908449f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=935b5d8f-8548-41fa-90d5-0bf689183546 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.680720088Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bad29d85-02ed-4c0c-9a57-e64526005dc4 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.680811540Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bad29d85-02ed-4c0c-9a57-e64526005dc4 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.682229329Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2946a1c9-341c-49d6-a273-b6402ce7bd54 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.682791933Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721647368682767351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2946a1c9-341c-49d6-a273-b6402ce7bd54 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.683491460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2c92176-e5fd-4e34-826b-ceef8bfeca66 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.683570148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2c92176-e5fd-4e34-826b-ceef8bfeca66 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.683926538Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c2a3493f61411824614c441e1d5f3533836e0c8afa72d53c7dd61281abbf00,PodSandboxId:71cbd29654a04cfa33acae2104625c2a7d7af11e2599abd56206c36754a3cbd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721647304228060094,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-65kqg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 103ec644-0628-4056-a814-044f38ece31f,},Annotations:map[string]string{io.kubernetes.container.hash: 318eb941,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c5daaeebe619d6f32b9f54bb15d461d2c263a24fc0e8e5162dc012448c052b,PodSandboxId:b00a720f793e1fc2cc6387f423af3e752daeb8645e98dcb7f4cdff3f14001902,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721647270731639193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksk8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67,},Annotations:map[string]string{io.kubernetes.container.hash: c62c8be1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3322a06af35cfbdbb916f0b20ac7b184a84cefba47094bfa5facfab0ec06735,PodSandboxId:4ce09d606f177ec101acc161c9fba1be8ea11505955da8064a551198e462c3c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721647270582575723,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-knmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5934987b-a9ec-4a7d-a446-b8a8c686ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 2cfecdef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f89de775577c02a3a6e9f6b3b8bda5f46ac61acae0d58173ad32707b6d8b90,PodSandboxId:adea1c11ada9b19c93f9c330f32bcd8ce0d505ab82405c44e26f9cdfab3e8a24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721647270529261108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 629c8fdb-9801-4ad0-857f-22817bc60e17,},An
notations:map[string]string{io.kubernetes.container.hash: bb0ae72c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20b67e53b9c38207a87f3b84f23b9922150eb375e18e1209254475066746763,PodSandboxId:fd958ec8bd7960b18ca1d3908fa781444ce3fe61c5c1364c1424048851ee1dca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721647270487268395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xv25n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f84e764d-47ca-4634-be5b-aec35a978516,},Annotations:map[string]string{io.ku
bernetes.container.hash: bd4d1ad3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f120afa0b416866b5719621806acc8acb202ebba2b96a4f775ec38c8f35b3bed,PodSandboxId:092da20f3dc36d253ad1c2982763f0ece671b42ca41f94a45ded61a97c5174e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721647266712601703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffde10143c677eeb363eba418ccd6135,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0b58aa965a534dae9905e00697ae0043257bc3cba127faf4cc2c9785c20ced7,PodSandboxId:beaf0d5fae8ca40535372ca52fed6f51ea7775a243ddc8229cfd377fe687d5aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721647266675536663,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baffe058d2eab14bba7bb69be802823,},Annotations:map[string]string{io.kubernetes.container.hash: 1c7963d5,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f533b9177b28fa7396cc8802a8a2c574a481880d21c9fdaddedf8efe1bda20f,PodSandboxId:149431d9bab728eef6ef26857918eb67aaa7d5d04d02348ca9cdb9ba948ad9c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721647266642994402,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaad1d5975ac0330d0eca26b6a335dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 908449f3,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1025ae107db528537896672e610b73744c367193dcdad9c8c334f88701990658,PodSandboxId:c4746e933e22198dd680a18b31b463bce38bf26805bcf1d6fe29d35e8ee39dca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721647266653238473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2934112736843e2be33b5a75c928eeba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a0458e53e93890aa4a446e6e1dccdec0c646a741abf8893201983961a9db2f,PodSandboxId:41e04960852e730f8728e4c37f9c1e1fc8ff99b855b775238a6c58738f757ba9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721646951761206290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-65kqg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 103ec644-0628-4056-a814-044f38ece31f,},Annotations:map[string]string{io.kubernetes.container.hash: 318eb941,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6cee19e34e4b971b582c31e860c234e7332fc662b0a03889f0abe267b33dc83,PodSandboxId:23dc5d18c3dc0807477d9547a0daa6f739e871e4c265802929313adae3e0de78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721646902871135551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-knmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5934987b-a9ec-4a7d-a446-b8a8c686ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 2cfecdef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ee5f6d8a84cf39ed93d9f97f2aea6af59792efc7ca3d22ab5977f29913d035,PodSandboxId:c0012afb7ace6dfc0e125e1bc98d3a407f3dec5da1c50d5370b3f6b14656f03b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721646902813854844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 629c8fdb-9801-4ad0-857f-22817bc60e17,},Annotations:map[string]string{io.kubernetes.container.hash: bb0ae72c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe3af5c01ec940f21176249de664c57dc58b34ffb6fceac4d35c55646e50b93,PodSandboxId:4b46d6216eff4c9b44543091fe96c14139cd81c0770b433a22551fe344a361df,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721646890785439732,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksk8n,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67,},Annotations:map[string]string{io.kubernetes.container.hash: c62c8be1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c87ae44611335d3e254a269ca9843e61acca277b61c3cfd5a7ee389eab0a0fe,PodSandboxId:d6ec778d8382dd3a5da66c96229b849b0083e2583bc5368486debe25f57f7f1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721646889110203722,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xv25n,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f84e764d-47ca-4634-be5b-aec35a978516,},Annotations:map[string]string{io.kubernetes.container.hash: bd4d1ad3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702ffe223ffbdbd26bcfbedcee8d508cd00c9fcdee81c7aa12ffab5e3cde854c,PodSandboxId:93b897045f2a6ca82c86b5cc3bbd2fb8400d28c3c9834c8f83140b1a1b6b1ed4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721646868481747906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ffde10143c677eeb363eba418ccd6135,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41200509492aeb47fecfd89d606a251df7b021c3320f35b984019521bdc3b59f,PodSandboxId:f0b860b9d01a7c38c4623070a44f0570613545a7418d6fbf19fee4d2f5c88092,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721646868469126497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baffe058d2eab14bba7bb69be802823,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1c7963d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fcf31453e06da42b109eb535b6676cbf599610361eb902a8727895311e88d1e,PodSandboxId:e04d4710c2a668577953d984f03f2056b61183ad103dbea70d9eeb104c69d9c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721646868460717903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2934112736843e2be33b5a75c928eeba,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a756aa97fb8a81312a36b2d014bf2619ac5d5b2c1ce5367fbec568e435b04f4,PodSandboxId:fb4864e3abe4138a4230ab20a3945772786a45b82a26234a5f40c68616c368cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721646868427941671,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaad1d5975ac0330d0eca26b6a335dc9,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 908449f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2c92176-e5fd-4e34-826b-ceef8bfeca66 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.731527809Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51351c7b-c807-4e52-9e3c-50c6dd5db4d9 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.731620300Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51351c7b-c807-4e52-9e3c-50c6dd5db4d9 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.733106191Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4961dcce-1359-4250-81e0-dc3edd61ecad name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.733648529Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721647368733624222,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4961dcce-1359-4250-81e0-dc3edd61ecad name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.734203624Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e8a9f77-8e09-4ed0-ad7a-6ffaa321c63c name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.734263357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e8a9f77-8e09-4ed0-ad7a-6ffaa321c63c name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:22:48 multinode-025157 crio[2875]: time="2024-07-22 11:22:48.734835297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c2a3493f61411824614c441e1d5f3533836e0c8afa72d53c7dd61281abbf00,PodSandboxId:71cbd29654a04cfa33acae2104625c2a7d7af11e2599abd56206c36754a3cbd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721647304228060094,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-65kqg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 103ec644-0628-4056-a814-044f38ece31f,},Annotations:map[string]string{io.kubernetes.container.hash: 318eb941,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c5daaeebe619d6f32b9f54bb15d461d2c263a24fc0e8e5162dc012448c052b,PodSandboxId:b00a720f793e1fc2cc6387f423af3e752daeb8645e98dcb7f4cdff3f14001902,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721647270731639193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksk8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67,},Annotations:map[string]string{io.kubernetes.container.hash: c62c8be1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3322a06af35cfbdbb916f0b20ac7b184a84cefba47094bfa5facfab0ec06735,PodSandboxId:4ce09d606f177ec101acc161c9fba1be8ea11505955da8064a551198e462c3c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721647270582575723,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-knmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5934987b-a9ec-4a7d-a446-b8a8c686ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 2cfecdef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f89de775577c02a3a6e9f6b3b8bda5f46ac61acae0d58173ad32707b6d8b90,PodSandboxId:adea1c11ada9b19c93f9c330f32bcd8ce0d505ab82405c44e26f9cdfab3e8a24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721647270529261108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 629c8fdb-9801-4ad0-857f-22817bc60e17,},An
notations:map[string]string{io.kubernetes.container.hash: bb0ae72c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20b67e53b9c38207a87f3b84f23b9922150eb375e18e1209254475066746763,PodSandboxId:fd958ec8bd7960b18ca1d3908fa781444ce3fe61c5c1364c1424048851ee1dca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721647270487268395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xv25n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f84e764d-47ca-4634-be5b-aec35a978516,},Annotations:map[string]string{io.ku
bernetes.container.hash: bd4d1ad3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f120afa0b416866b5719621806acc8acb202ebba2b96a4f775ec38c8f35b3bed,PodSandboxId:092da20f3dc36d253ad1c2982763f0ece671b42ca41f94a45ded61a97c5174e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721647266712601703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffde10143c677eeb363eba418ccd6135,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0b58aa965a534dae9905e00697ae0043257bc3cba127faf4cc2c9785c20ced7,PodSandboxId:beaf0d5fae8ca40535372ca52fed6f51ea7775a243ddc8229cfd377fe687d5aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721647266675536663,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baffe058d2eab14bba7bb69be802823,},Annotations:map[string]string{io.kubernetes.container.hash: 1c7963d5,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f533b9177b28fa7396cc8802a8a2c574a481880d21c9fdaddedf8efe1bda20f,PodSandboxId:149431d9bab728eef6ef26857918eb67aaa7d5d04d02348ca9cdb9ba948ad9c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721647266642994402,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaad1d5975ac0330d0eca26b6a335dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 908449f3,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1025ae107db528537896672e610b73744c367193dcdad9c8c334f88701990658,PodSandboxId:c4746e933e22198dd680a18b31b463bce38bf26805bcf1d6fe29d35e8ee39dca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721647266653238473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2934112736843e2be33b5a75c928eeba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a0458e53e93890aa4a446e6e1dccdec0c646a741abf8893201983961a9db2f,PodSandboxId:41e04960852e730f8728e4c37f9c1e1fc8ff99b855b775238a6c58738f757ba9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721646951761206290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-65kqg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 103ec644-0628-4056-a814-044f38ece31f,},Annotations:map[string]string{io.kubernetes.container.hash: 318eb941,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6cee19e34e4b971b582c31e860c234e7332fc662b0a03889f0abe267b33dc83,PodSandboxId:23dc5d18c3dc0807477d9547a0daa6f739e871e4c265802929313adae3e0de78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721646902871135551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-knmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5934987b-a9ec-4a7d-a446-b8a8c686ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 2cfecdef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ee5f6d8a84cf39ed93d9f97f2aea6af59792efc7ca3d22ab5977f29913d035,PodSandboxId:c0012afb7ace6dfc0e125e1bc98d3a407f3dec5da1c50d5370b3f6b14656f03b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721646902813854844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 629c8fdb-9801-4ad0-857f-22817bc60e17,},Annotations:map[string]string{io.kubernetes.container.hash: bb0ae72c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe3af5c01ec940f21176249de664c57dc58b34ffb6fceac4d35c55646e50b93,PodSandboxId:4b46d6216eff4c9b44543091fe96c14139cd81c0770b433a22551fe344a361df,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721646890785439732,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksk8n,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67,},Annotations:map[string]string{io.kubernetes.container.hash: c62c8be1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c87ae44611335d3e254a269ca9843e61acca277b61c3cfd5a7ee389eab0a0fe,PodSandboxId:d6ec778d8382dd3a5da66c96229b849b0083e2583bc5368486debe25f57f7f1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721646889110203722,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xv25n,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f84e764d-47ca-4634-be5b-aec35a978516,},Annotations:map[string]string{io.kubernetes.container.hash: bd4d1ad3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702ffe223ffbdbd26bcfbedcee8d508cd00c9fcdee81c7aa12ffab5e3cde854c,PodSandboxId:93b897045f2a6ca82c86b5cc3bbd2fb8400d28c3c9834c8f83140b1a1b6b1ed4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721646868481747906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ffde10143c677eeb363eba418ccd6135,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41200509492aeb47fecfd89d606a251df7b021c3320f35b984019521bdc3b59f,PodSandboxId:f0b860b9d01a7c38c4623070a44f0570613545a7418d6fbf19fee4d2f5c88092,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721646868469126497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baffe058d2eab14bba7bb69be802823,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1c7963d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fcf31453e06da42b109eb535b6676cbf599610361eb902a8727895311e88d1e,PodSandboxId:e04d4710c2a668577953d984f03f2056b61183ad103dbea70d9eeb104c69d9c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721646868460717903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2934112736843e2be33b5a75c928eeba,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a756aa97fb8a81312a36b2d014bf2619ac5d5b2c1ce5367fbec568e435b04f4,PodSandboxId:fb4864e3abe4138a4230ab20a3945772786a45b82a26234a5f40c68616c368cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721646868427941671,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaad1d5975ac0330d0eca26b6a335dc9,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 908449f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e8a9f77-8e09-4ed0-ad7a-6ffaa321c63c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e7c2a3493f614       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   71cbd29654a04       busybox-fc5497c4f-65kqg
	76c5daaeebe61       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   b00a720f793e1       kindnet-ksk8n
	c3322a06af35c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   4ce09d606f177       coredns-7db6d8ff4d-knmjk
	47f89de775577       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   adea1c11ada9b       storage-provisioner
	d20b67e53b9c3       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   fd958ec8bd796       kube-proxy-xv25n
	f120afa0b4168       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   092da20f3dc36       kube-scheduler-multinode-025157
	b0b58aa965a53       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   beaf0d5fae8ca       etcd-multinode-025157
	1025ae107db52       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   c4746e933e221       kube-controller-manager-multinode-025157
	0f533b9177b28       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   149431d9bab72       kube-apiserver-multinode-025157
	e8a0458e53e93       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   41e04960852e7       busybox-fc5497c4f-65kqg
	c6cee19e34e4b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   23dc5d18c3dc0       coredns-7db6d8ff4d-knmjk
	c8ee5f6d8a84c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   c0012afb7ace6       storage-provisioner
	1fe3af5c01ec9       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago        Exited              kindnet-cni               0                   4b46d6216eff4       kindnet-ksk8n
	1c87ae4461133       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago        Exited              kube-proxy                0                   d6ec778d8382d       kube-proxy-xv25n
	702ffe223ffbd       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   93b897045f2a6       kube-scheduler-multinode-025157
	41200509492ae       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   f0b860b9d01a7       etcd-multinode-025157
	9fcf31453e06d       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   e04d4710c2a66       kube-controller-manager-multinode-025157
	3a756aa97fb8a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   fb4864e3abe41       kube-apiserver-multinode-025157
	
	
	==> coredns [c3322a06af35cfbdbb916f0b20ac7b184a84cefba47094bfa5facfab0ec06735] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54443 - 43872 "HINFO IN 332896090760497034.4179894359598936415. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010483354s
	
	
	==> coredns [c6cee19e34e4b971b582c31e860c234e7332fc662b0a03889f0abe267b33dc83] <==
	[INFO] 10.244.1.2:37567 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001974075s
	[INFO] 10.244.1.2:41722 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000102124s
	[INFO] 10.244.1.2:52402 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080501s
	[INFO] 10.244.1.2:51810 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001293499s
	[INFO] 10.244.1.2:55946 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000065316s
	[INFO] 10.244.1.2:51899 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076315s
	[INFO] 10.244.1.2:55302 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057978s
	[INFO] 10.244.0.3:56688 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087646s
	[INFO] 10.244.0.3:37771 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010968s
	[INFO] 10.244.0.3:34446 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053275s
	[INFO] 10.244.0.3:58786 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066903s
	[INFO] 10.244.1.2:60707 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121612s
	[INFO] 10.244.1.2:36258 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101944s
	[INFO] 10.244.1.2:36236 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085878s
	[INFO] 10.244.1.2:45146 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096153s
	[INFO] 10.244.0.3:58546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000086945s
	[INFO] 10.244.0.3:36364 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109212s
	[INFO] 10.244.0.3:52804 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000076438s
	[INFO] 10.244.0.3:49762 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000067055s
	[INFO] 10.244.1.2:60768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135531s
	[INFO] 10.244.1.2:44434 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000111812s
	[INFO] 10.244.1.2:50074 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000122017s
	[INFO] 10.244.1.2:40866 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078556s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-025157
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-025157
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=multinode-025157
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T11_14_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 11:14:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025157
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:22:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 11:21:09 +0000   Mon, 22 Jul 2024 11:14:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 11:21:09 +0000   Mon, 22 Jul 2024 11:14:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 11:21:09 +0000   Mon, 22 Jul 2024 11:14:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 11:21:09 +0000   Mon, 22 Jul 2024 11:15:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.158
	  Hostname:    multinode-025157
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6fa8e3a447ff48e793af8a35e95c1e84
	  System UUID:                6fa8e3a4-47ff-48e7-93af-8a35e95c1e84
	  Boot ID:                    9c2c6869-d639-4ee9-9aed-fbe6e9f60df6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-65kqg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 coredns-7db6d8ff4d-knmjk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m2s
	  kube-system                 etcd-multinode-025157                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m16s
	  kube-system                 kindnet-ksk8n                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m3s
	  kube-system                 kube-apiserver-multinode-025157             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-controller-manager-multinode-025157    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-proxy-xv25n                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-scheduler-multinode-025157             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 7m59s                kube-proxy       
	  Normal  Starting                 98s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m16s                kubelet          Node multinode-025157 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m16s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m16s                kubelet          Node multinode-025157 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m16s                kubelet          Node multinode-025157 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m16s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m3s                 node-controller  Node multinode-025157 event: Registered Node multinode-025157 in Controller
	  Normal  NodeReady                7m47s                kubelet          Node multinode-025157 status is now: NodeReady
	  Normal  Starting                 104s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)  kubelet          Node multinode-025157 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)  kubelet          Node multinode-025157 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)  kubelet          Node multinode-025157 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           86s                  node-controller  Node multinode-025157 event: Registered Node multinode-025157 in Controller
	
	
	Name:               multinode-025157-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-025157-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=multinode-025157
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T11_21_51_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 11:21:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025157-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:22:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 11:22:21 +0000   Mon, 22 Jul 2024 11:21:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 11:22:21 +0000   Mon, 22 Jul 2024 11:21:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 11:22:21 +0000   Mon, 22 Jul 2024 11:21:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 11:22:21 +0000   Mon, 22 Jul 2024 11:22:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.155
	  Hostname:    multinode-025157-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7973d658f5b44133b42872bf02fb84fd
	  System UUID:                7973d658-f5b4-4133-b428-72bf02fb84fd
	  Boot ID:                    d9b4f628-a5d1-4aed-9450-79a68f15d012
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xp74m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kindnet-5wd8q              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m19s
	  kube-system                 kube-proxy-psdlq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m14s                  kube-proxy  
	  Normal  Starting                 53s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m19s (x2 over 7m19s)  kubelet     Node multinode-025157-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m19s (x2 over 7m19s)  kubelet     Node multinode-025157-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m19s (x2 over 7m19s)  kubelet     Node multinode-025157-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m1s                   kubelet     Node multinode-025157-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  58s (x2 over 58s)      kubelet     Node multinode-025157-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x2 over 58s)      kubelet     Node multinode-025157-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x2 over 58s)      kubelet     Node multinode-025157-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  58s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-025157-m02 status is now: NodeReady
	
	
	Name:               multinode-025157-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-025157-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=multinode-025157
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T11_22_28_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 11:22:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025157-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:22:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 11:22:45 +0000   Mon, 22 Jul 2024 11:22:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 11:22:45 +0000   Mon, 22 Jul 2024 11:22:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 11:22:45 +0000   Mon, 22 Jul 2024 11:22:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 11:22:45 +0000   Mon, 22 Jul 2024 11:22:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    multinode-025157-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55d1fd7ffb914172b3375adc10ebfad4
	  System UUID:                55d1fd7f-fb91-4172-b337-5adc10ebfad4
	  Boot ID:                    8ea63369-d72c-4c9e-91c1-6227f9143517
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-zgpkm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m30s
	  kube-system                 kube-proxy-4n82n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m24s                  kube-proxy  
	  Normal  Starting                 16s                    kube-proxy  
	  Normal  Starting                 5m37s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m30s (x2 over 6m30s)  kubelet     Node multinode-025157-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s (x2 over 6m30s)  kubelet     Node multinode-025157-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m30s (x2 over 6m30s)  kubelet     Node multinode-025157-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m30s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m12s                  kubelet     Node multinode-025157-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m43s (x2 over 5m43s)  kubelet     Node multinode-025157-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m43s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m43s (x2 over 5m43s)  kubelet     Node multinode-025157-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m43s (x2 over 5m43s)  kubelet     Node multinode-025157-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m25s                  kubelet     Node multinode-025157-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21s (x2 over 22s)      kubelet     Node multinode-025157-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 22s)      kubelet     Node multinode-025157-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 22s)      kubelet     Node multinode-025157-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-025157-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.059845] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056408] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.196554] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.121955] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.268206] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +4.124916] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.738105] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.059262] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.509482] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.079680] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.758835] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.782243] systemd-fstab-generator[1477]: Ignoring "noauto" option for root device
	[Jul22 11:15] kauditd_printk_skb: 60 callbacks suppressed
	[ +48.067026] kauditd_printk_skb: 14 callbacks suppressed
	[Jul22 11:21] systemd-fstab-generator[2792]: Ignoring "noauto" option for root device
	[  +0.141906] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.176952] systemd-fstab-generator[2818]: Ignoring "noauto" option for root device
	[  +0.155282] systemd-fstab-generator[2830]: Ignoring "noauto" option for root device
	[  +0.272379] systemd-fstab-generator[2858]: Ignoring "noauto" option for root device
	[  +1.895007] systemd-fstab-generator[2961]: Ignoring "noauto" option for root device
	[  +1.882517] systemd-fstab-generator[3085]: Ignoring "noauto" option for root device
	[  +0.810928] kauditd_printk_skb: 144 callbacks suppressed
	[ +16.804251] kauditd_printk_skb: 72 callbacks suppressed
	[  +3.246350] systemd-fstab-generator[3901]: Ignoring "noauto" option for root device
	[ +17.560638] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [41200509492aeb47fecfd89d606a251df7b021c3320f35b984019521bdc3b59f] <==
	{"level":"info","ts":"2024-07-22T11:15:30.350897Z","caller":"traceutil/trace.go:171","msg":"trace[67404300] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"193.412177ms","start":"2024-07-22T11:15:30.157477Z","end":"2024-07-22T11:15:30.350889Z","steps":["trace[67404300] 'process raft request'  (duration: 193.033467ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T11:15:30.351078Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.078248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-025157-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-22T11:15:30.351101Z","caller":"traceutil/trace.go:171","msg":"trace[565475713] range","detail":"{range_begin:/registry/minions/multinode-025157-m02; range_end:; response_count:1; response_revision:452; }","duration":"147.184841ms","start":"2024-07-22T11:15:30.20391Z","end":"2024-07-22T11:15:30.351094Z","steps":["trace[565475713] 'agreement among raft nodes before linearized reading'  (duration: 147.039584ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T11:15:38.493632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.825303ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17619648383778651630 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/kindnet\" mod_revision:465 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kindnet\" value_size:4635 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kindnet\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-22T11:15:38.493719Z","caller":"traceutil/trace.go:171","msg":"trace[1683772628] linearizableReadLoop","detail":"{readStateIndex:523; appliedIndex:522; }","duration":"125.597666ms","start":"2024-07-22T11:15:38.36811Z","end":"2024-07-22T11:15:38.493708Z","steps":["trace[1683772628] 'read index received'  (duration: 3.25798ms)","trace[1683772628] 'applied index is now lower than readState.Index'  (duration: 122.338308ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T11:15:38.493805Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.689311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-025157-m02\" ","response":"range_response_count:1 size:2953"}
	{"level":"info","ts":"2024-07-22T11:15:38.493841Z","caller":"traceutil/trace.go:171","msg":"trace[534512718] range","detail":"{range_begin:/registry/minions/multinode-025157-m02; range_end:; response_count:1; response_revision:496; }","duration":"125.748608ms","start":"2024-07-22T11:15:38.368086Z","end":"2024-07-22T11:15:38.493835Z","steps":["trace[534512718] 'agreement among raft nodes before linearized reading'  (duration: 125.661881ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T11:15:38.494055Z","caller":"traceutil/trace.go:171","msg":"trace[1151554224] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"267.665186ms","start":"2024-07-22T11:15:38.226334Z","end":"2024-07-22T11:15:38.493999Z","steps":["trace[1151554224] 'process raft request'  (duration: 145.089411ms)","trace[1151554224] 'compare'  (duration: 121.109519ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-22T11:16:19.621159Z","caller":"traceutil/trace.go:171","msg":"trace[1662668731] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"185.865288ms","start":"2024-07-22T11:16:19.435266Z","end":"2024-07-22T11:16:19.621132Z","steps":["trace[1662668731] 'process raft request'  (duration: 185.831104ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T11:16:19.621411Z","caller":"traceutil/trace.go:171","msg":"trace[290881786] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"232.548282ms","start":"2024-07-22T11:16:19.388853Z","end":"2024-07-22T11:16:19.621401Z","steps":["trace[290881786] 'process raft request'  (duration: 148.815885ms)","trace[290881786] 'compare'  (duration: 83.252455ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-22T11:16:19.62152Z","caller":"traceutil/trace.go:171","msg":"trace[163786640] linearizableReadLoop","detail":"{readStateIndex:615; appliedIndex:614; }","duration":"222.148608ms","start":"2024-07-22T11:16:19.399365Z","end":"2024-07-22T11:16:19.621514Z","steps":["trace[163786640] 'read index received'  (duration: 138.313621ms)","trace[163786640] 'applied index is now lower than readState.Index'  (duration: 83.83448ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T11:16:19.621719Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.29869ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-22T11:16:19.621762Z","caller":"traceutil/trace.go:171","msg":"trace[599180933] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:581; }","duration":"222.411891ms","start":"2024-07-22T11:16:19.399345Z","end":"2024-07-22T11:16:19.621756Z","steps":["trace[599180933] 'agreement among raft nodes before linearized reading'  (duration: 222.297194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T11:16:19.621854Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.793881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/multinode-025157-m03.17e484ce58fcaf6f\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-22T11:16:19.621891Z","caller":"traceutil/trace.go:171","msg":"trace[488886405] range","detail":"{range_begin:/registry/events/default/multinode-025157-m03.17e484ce58fcaf6f; range_end:; response_count:0; response_revision:581; }","duration":"186.887229ms","start":"2024-07-22T11:16:19.434995Z","end":"2024-07-22T11:16:19.621883Z","steps":["trace[488886405] 'agreement among raft nodes before linearized reading'  (duration: 186.842498ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T11:19:29.852743Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-22T11:19:29.852869Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-025157","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.158:2380"],"advertise-client-urls":["https://192.168.39.158:2379"]}
	{"level":"warn","ts":"2024-07-22T11:19:29.85297Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.158:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T11:19:29.852997Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.158:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T11:19:29.861726Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T11:19:29.861816Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-22T11:19:29.932868Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c2e3bdcd19c3f485","current-leader-member-id":"c2e3bdcd19c3f485"}
	{"level":"info","ts":"2024-07-22T11:19:29.935269Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.158:2380"}
	{"level":"info","ts":"2024-07-22T11:19:29.935456Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.158:2380"}
	{"level":"info","ts":"2024-07-22T11:19:29.935489Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-025157","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.158:2380"],"advertise-client-urls":["https://192.168.39.158:2379"]}
	
	
	==> etcd [b0b58aa965a534dae9905e00697ae0043257bc3cba127faf4cc2c9785c20ced7] <==
	{"level":"info","ts":"2024-07-22T11:21:07.190435Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-22T11:21:07.190729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 switched to configuration voters=(14043276751669556357)"}
	{"level":"info","ts":"2024-07-22T11:21:07.190801Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"632f2ed81879f448","local-member-id":"c2e3bdcd19c3f485","added-peer-id":"c2e3bdcd19c3f485","added-peer-peer-urls":["https://192.168.39.158:2380"]}
	{"level":"info","ts":"2024-07-22T11:21:07.190938Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"632f2ed81879f448","local-member-id":"c2e3bdcd19c3f485","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:21:07.190986Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:21:07.205712Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-22T11:21:07.205937Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c2e3bdcd19c3f485","initial-advertise-peer-urls":["https://192.168.39.158:2380"],"listen-peer-urls":["https://192.168.39.158:2380"],"advertise-client-urls":["https://192.168.39.158:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.158:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-22T11:21:07.209764Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.158:2380"}
	{"level":"info","ts":"2024-07-22T11:21:07.209798Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.158:2380"}
	{"level":"info","ts":"2024-07-22T11:21:07.205993Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T11:21:08.534494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-22T11:21:08.534552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-22T11:21:08.534585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 received MsgPreVoteResp from c2e3bdcd19c3f485 at term 2"}
	{"level":"info","ts":"2024-07-22T11:21:08.534612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 became candidate at term 3"}
	{"level":"info","ts":"2024-07-22T11:21:08.534623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 received MsgVoteResp from c2e3bdcd19c3f485 at term 3"}
	{"level":"info","ts":"2024-07-22T11:21:08.534636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 became leader at term 3"}
	{"level":"info","ts":"2024-07-22T11:21:08.534646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c2e3bdcd19c3f485 elected leader c2e3bdcd19c3f485 at term 3"}
	{"level":"info","ts":"2024-07-22T11:21:08.539332Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c2e3bdcd19c3f485","local-member-attributes":"{Name:multinode-025157 ClientURLs:[https://192.168.39.158:2379]}","request-path":"/0/members/c2e3bdcd19c3f485/attributes","cluster-id":"632f2ed81879f448","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T11:21:08.53945Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:21:08.539492Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T11:21:08.539502Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T11:21:08.539459Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:21:08.541668Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T11:21:08.541807Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.158:2379"}
	{"level":"info","ts":"2024-07-22T11:22:32.336502Z","caller":"traceutil/trace.go:171","msg":"trace[805025319] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"102.559202ms","start":"2024-07-22T11:22:32.233902Z","end":"2024-07-22T11:22:32.336461Z","steps":["trace[805025319] 'process raft request'  (duration: 58.646145ms)","trace[805025319] 'compare'  (duration: 43.385281ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:22:49 up 8 min,  0 users,  load average: 0.26, 0.20, 0.14
	Linux multinode-025157 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1fe3af5c01ec940f21176249de664c57dc58b34ffb6fceac4d35c55646e50b93] <==
	I0722 11:18:41.841239       1 main.go:322] Node multinode-025157-m03 has CIDR [10.244.3.0/24] 
	I0722 11:18:51.837461       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0722 11:18:51.837526       1 main.go:322] Node multinode-025157-m03 has CIDR [10.244.3.0/24] 
	I0722 11:18:51.837678       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:18:51.837704       1 main.go:299] handling current node
	I0722 11:18:51.837729       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:18:51.837748       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:19:01.842827       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:19:01.842927       1 main.go:299] handling current node
	I0722 11:19:01.842965       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:19:01.842971       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:19:01.843179       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0722 11:19:01.843202       1 main.go:322] Node multinode-025157-m03 has CIDR [10.244.3.0/24] 
	I0722 11:19:11.840470       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0722 11:19:11.840649       1 main.go:322] Node multinode-025157-m03 has CIDR [10.244.3.0/24] 
	I0722 11:19:11.840825       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:19:11.840851       1 main.go:299] handling current node
	I0722 11:19:11.840872       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:19:11.840890       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:19:21.842534       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:19:21.842592       1 main.go:299] handling current node
	I0722 11:19:21.842614       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:19:21.842620       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:19:21.842768       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0722 11:19:21.842793       1 main.go:322] Node multinode-025157-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [76c5daaeebe619d6f32b9f54bb15d461d2c263a24fc0e8e5162dc012448c052b] <==
	I0722 11:22:01.746184       1 main.go:322] Node multinode-025157-m03 has CIDR [10.244.3.0/24] 
	I0722 11:22:11.745122       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:22:11.745178       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:22:11.745376       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0722 11:22:11.745407       1 main.go:322] Node multinode-025157-m03 has CIDR [10.244.3.0/24] 
	I0722 11:22:11.745461       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:22:11.745467       1 main.go:299] handling current node
	I0722 11:22:21.752313       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:22:21.752512       1 main.go:299] handling current node
	I0722 11:22:21.752561       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:22:21.752658       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:22:21.752816       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0722 11:22:21.752848       1 main.go:322] Node multinode-025157-m03 has CIDR [10.244.3.0/24] 
	I0722 11:22:31.745841       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:22:31.745898       1 main.go:299] handling current node
	I0722 11:22:31.745915       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:22:31.745921       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:22:31.746164       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0722 11:22:31.746193       1 main.go:322] Node multinode-025157-m03 has CIDR [10.244.2.0/24] 
	I0722 11:22:41.745779       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0722 11:22:41.745832       1 main.go:322] Node multinode-025157-m03 has CIDR [10.244.2.0/24] 
	I0722 11:22:41.746001       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:22:41.746075       1 main.go:299] handling current node
	I0722 11:22:41.746091       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:22:41.746113       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0f533b9177b28fa7396cc8802a8a2c574a481880d21c9fdaddedf8efe1bda20f] <==
	I0722 11:21:09.846915       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0722 11:21:09.847046       1 policy_source.go:224] refreshing policies
	I0722 11:21:09.864918       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0722 11:21:09.864983       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0722 11:21:09.866329       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0722 11:21:09.867532       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0722 11:21:09.868105       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0722 11:21:09.876198       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0722 11:21:09.877683       1 shared_informer.go:320] Caches are synced for configmaps
	E0722 11:21:09.882790       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0722 11:21:09.885270       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0722 11:21:09.903135       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0722 11:21:09.903249       1 aggregator.go:165] initial CRD sync complete...
	I0722 11:21:09.903294       1 autoregister_controller.go:141] Starting autoregister controller
	I0722 11:21:09.903318       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0722 11:21:09.903340       1 cache.go:39] Caches are synced for autoregister controller
	I0722 11:21:09.936729       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0722 11:21:10.785694       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0722 11:21:11.930002       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0722 11:21:12.049892       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0722 11:21:12.065775       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0722 11:21:12.128747       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0722 11:21:12.136277       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0722 11:21:23.148107       1 controller.go:615] quota admission added evaluator for: endpoints
	I0722 11:21:23.347849       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [3a756aa97fb8a81312a36b2d014bf2619ac5d5b2c1ce5367fbec568e435b04f4] <==
	I0722 11:14:31.786684       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0722 11:14:31.791647       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0722 11:14:31.791675       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0722 11:14:32.300382       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0722 11:14:32.341172       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0722 11:14:32.387691       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0722 11:14:32.395821       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.158]
	I0722 11:14:32.396547       1 controller.go:615] quota admission added evaluator for: endpoints
	I0722 11:14:32.400359       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0722 11:14:32.866678       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0722 11:14:33.306718       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0722 11:14:33.327162       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0722 11:14:33.360472       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0722 11:14:46.868654       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0722 11:14:46.987355       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0722 11:15:52.974761       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39490: use of closed network connection
	E0722 11:15:53.140919       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39506: use of closed network connection
	E0722 11:15:53.316187       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39522: use of closed network connection
	E0722 11:15:53.486605       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39550: use of closed network connection
	E0722 11:15:53.651403       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39560: use of closed network connection
	E0722 11:15:54.093451       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39600: use of closed network connection
	E0722 11:15:54.267177       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39628: use of closed network connection
	E0722 11:15:54.434672       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39640: use of closed network connection
	E0722 11:15:54.606159       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39650: use of closed network connection
	I0722 11:19:29.860609       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [1025ae107db528537896672e610b73744c367193dcdad9c8c334f88701990658] <==
	I0722 11:21:23.750072       1 shared_informer.go:320] Caches are synced for garbage collector
	I0722 11:21:23.754355       1 shared_informer.go:320] Caches are synced for garbage collector
	I0722 11:21:23.754427       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0722 11:21:46.863835       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.469608ms"
	I0722 11:21:46.871471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.183945ms"
	I0722 11:21:46.871541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.408µs"
	I0722 11:21:51.179436       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025157-m02\" does not exist"
	I0722 11:21:51.192930       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025157-m02" podCIDRs=["10.244.1.0/24"]
	I0722 11:21:52.800107       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.438µs"
	I0722 11:21:53.081865       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.638µs"
	I0722 11:21:53.106690       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.443µs"
	I0722 11:21:53.114598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.48µs"
	I0722 11:21:53.128279       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.43µs"
	I0722 11:21:53.135328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.475µs"
	I0722 11:21:53.137706       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.68µs"
	I0722 11:22:08.977510       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:22:08.995687       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.682µs"
	I0722 11:22:09.007155       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.856µs"
	I0722 11:22:10.426087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.531976ms"
	I0722 11:22:10.426310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.464µs"
	I0722 11:22:27.012872       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:22:28.064188       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:22:28.064939       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025157-m03\" does not exist"
	I0722 11:22:28.073125       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025157-m03" podCIDRs=["10.244.2.0/24"]
	I0722 11:22:45.799714       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	
	
	==> kube-controller-manager [9fcf31453e06da42b109eb535b6676cbf599610361eb902a8727895311e88d1e] <==
	I0722 11:15:30.358770       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025157-m02\" does not exist"
	I0722 11:15:30.390990       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025157-m02" podCIDRs=["10.244.1.0/24"]
	I0722 11:15:31.094391       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025157-m02"
	I0722 11:15:48.177060       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:15:50.412369       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.480398ms"
	I0722 11:15:50.444749       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.301494ms"
	I0722 11:15:50.460866       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.005329ms"
	I0722 11:15:50.460968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.858µs"
	I0722 11:15:52.184169       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.748566ms"
	I0722 11:15:52.185069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.984µs"
	I0722 11:15:52.575296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.793352ms"
	I0722 11:15:52.576157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.324µs"
	I0722 11:16:19.624810       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025157-m03\" does not exist"
	I0722 11:16:19.625610       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:16:19.688728       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025157-m03" podCIDRs=["10.244.2.0/24"]
	I0722 11:16:21.115704       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025157-m03"
	I0722 11:16:37.585069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:17:05.550243       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:17:06.659341       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:17:06.659462       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025157-m03\" does not exist"
	I0722 11:17:06.690359       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025157-m03" podCIDRs=["10.244.3.0/24"]
	I0722 11:17:24.339545       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:18:06.167967       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m03"
	I0722 11:18:06.208510       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.418819ms"
	I0722 11:18:06.209328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.342µs"
	
	
	==> kube-proxy [1c87ae44611335d3e254a269ca9843e61acca277b61c3cfd5a7ee389eab0a0fe] <==
	I0722 11:14:49.251413       1 server_linux.go:69] "Using iptables proxy"
	I0722 11:14:49.262542       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.158"]
	I0722 11:14:49.298682       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 11:14:49.298769       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 11:14:49.298787       1 server_linux.go:165] "Using iptables Proxier"
	I0722 11:14:49.301415       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 11:14:49.301612       1 server.go:872] "Version info" version="v1.30.3"
	I0722 11:14:49.301806       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 11:14:49.303349       1 config.go:192] "Starting service config controller"
	I0722 11:14:49.303906       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 11:14:49.311095       1 shared_informer.go:320] Caches are synced for service config
	I0722 11:14:49.303704       1 config.go:101] "Starting endpoint slice config controller"
	I0722 11:14:49.311305       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 11:14:49.311327       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 11:14:49.304694       1 config.go:319] "Starting node config controller"
	I0722 11:14:49.311442       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 11:14:49.311447       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d20b67e53b9c38207a87f3b84f23b9922150eb375e18e1209254475066746763] <==
	I0722 11:21:10.780689       1 server_linux.go:69] "Using iptables proxy"
	I0722 11:21:10.799311       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.158"]
	I0722 11:21:10.879730       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 11:21:10.879834       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 11:21:10.879855       1 server_linux.go:165] "Using iptables Proxier"
	I0722 11:21:10.886910       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 11:21:10.887202       1 server.go:872] "Version info" version="v1.30.3"
	I0722 11:21:10.887230       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 11:21:10.891165       1 config.go:192] "Starting service config controller"
	I0722 11:21:10.891195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 11:21:10.891220       1 config.go:101] "Starting endpoint slice config controller"
	I0722 11:21:10.891224       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 11:21:10.891567       1 config.go:319] "Starting node config controller"
	I0722 11:21:10.891596       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 11:21:10.991354       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 11:21:10.991422       1 shared_informer.go:320] Caches are synced for service config
	I0722 11:21:10.992122       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [702ffe223ffbdbd26bcfbedcee8d508cd00c9fcdee81c7aa12ffab5e3cde854c] <==
	E0722 11:14:30.902870       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 11:14:30.902929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 11:14:30.902954       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 11:14:31.784616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 11:14:31.784664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0722 11:14:31.819176       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 11:14:31.819224       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 11:14:31.836267       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0722 11:14:31.836309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0722 11:14:31.890391       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 11:14:31.890418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 11:14:31.902208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 11:14:31.902232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 11:14:31.934503       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 11:14:31.934544       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 11:14:32.030813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 11:14:32.030855       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 11:14:32.048468       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 11:14:32.048550       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 11:14:32.055883       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 11:14:32.056057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 11:14:32.111970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 11:14:32.112260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0722 11:14:33.893092       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0722 11:19:29.851795       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f120afa0b416866b5719621806acc8acb202ebba2b96a4f775ec38c8f35b3bed] <==
	I0722 11:21:07.855570       1 serving.go:380] Generated self-signed cert in-memory
	W0722 11:21:09.863734       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0722 11:21:09.863836       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 11:21:09.863847       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0722 11:21:09.863856       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0722 11:21:09.888283       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0722 11:21:09.888394       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 11:21:09.890644       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0722 11:21:09.890683       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 11:21:09.891344       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0722 11:21:09.891411       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0722 11:21:09.991262       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 11:21:07 multinode-025157 kubelet[3092]: E0722 11:21:07.001620    3092 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.158:8443: connect: connection refused
	Jul 22 11:21:07 multinode-025157 kubelet[3092]: I0722 11:21:07.495760    3092 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025157"
	Jul 22 11:21:09 multinode-025157 kubelet[3092]: I0722 11:21:09.920925    3092 kubelet_node_status.go:112] "Node was previously registered" node="multinode-025157"
	Jul 22 11:21:09 multinode-025157 kubelet[3092]: I0722 11:21:09.921352    3092 kubelet_node_status.go:76] "Successfully registered node" node="multinode-025157"
	Jul 22 11:21:09 multinode-025157 kubelet[3092]: I0722 11:21:09.922608    3092 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 22 11:21:09 multinode-025157 kubelet[3092]: I0722 11:21:09.923817    3092 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 22 11:21:09 multinode-025157 kubelet[3092]: I0722 11:21:09.969220    3092 apiserver.go:52] "Watching apiserver"
	Jul 22 11:21:09 multinode-025157 kubelet[3092]: I0722 11:21:09.972849    3092 topology_manager.go:215] "Topology Admit Handler" podUID="6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67" podNamespace="kube-system" podName="kindnet-ksk8n"
	Jul 22 11:21:09 multinode-025157 kubelet[3092]: I0722 11:21:09.972977    3092 topology_manager.go:215] "Topology Admit Handler" podUID="f84e764d-47ca-4634-be5b-aec35a978516" podNamespace="kube-system" podName="kube-proxy-xv25n"
	Jul 22 11:21:09 multinode-025157 kubelet[3092]: I0722 11:21:09.973088    3092 topology_manager.go:215] "Topology Admit Handler" podUID="5934987b-a9ec-4a7d-a446-b8a8c686ab04" podNamespace="kube-system" podName="coredns-7db6d8ff4d-knmjk"
	Jul 22 11:21:09 multinode-025157 kubelet[3092]: I0722 11:21:09.973137    3092 topology_manager.go:215] "Topology Admit Handler" podUID="629c8fdb-9801-4ad0-857f-22817bc60e17" podNamespace="kube-system" podName="storage-provisioner"
	Jul 22 11:21:09 multinode-025157 kubelet[3092]: I0722 11:21:09.973190    3092 topology_manager.go:215] "Topology Admit Handler" podUID="103ec644-0628-4056-a814-044f38ece31f" podNamespace="default" podName="busybox-fc5497c4f-65kqg"
	Jul 22 11:21:09 multinode-025157 kubelet[3092]: I0722 11:21:09.988929    3092 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 22 11:21:10 multinode-025157 kubelet[3092]: I0722 11:21:10.046401    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67-lib-modules\") pod \"kindnet-ksk8n\" (UID: \"6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67\") " pod="kube-system/kindnet-ksk8n"
	Jul 22 11:21:10 multinode-025157 kubelet[3092]: I0722 11:21:10.046514    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f84e764d-47ca-4634-be5b-aec35a978516-xtables-lock\") pod \"kube-proxy-xv25n\" (UID: \"f84e764d-47ca-4634-be5b-aec35a978516\") " pod="kube-system/kube-proxy-xv25n"
	Jul 22 11:21:10 multinode-025157 kubelet[3092]: I0722 11:21:10.046583    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67-cni-cfg\") pod \"kindnet-ksk8n\" (UID: \"6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67\") " pod="kube-system/kindnet-ksk8n"
	Jul 22 11:21:10 multinode-025157 kubelet[3092]: I0722 11:21:10.046624    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f84e764d-47ca-4634-be5b-aec35a978516-lib-modules\") pod \"kube-proxy-xv25n\" (UID: \"f84e764d-47ca-4634-be5b-aec35a978516\") " pod="kube-system/kube-proxy-xv25n"
	Jul 22 11:21:10 multinode-025157 kubelet[3092]: I0722 11:21:10.046688    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/629c8fdb-9801-4ad0-857f-22817bc60e17-tmp\") pod \"storage-provisioner\" (UID: \"629c8fdb-9801-4ad0-857f-22817bc60e17\") " pod="kube-system/storage-provisioner"
	Jul 22 11:21:10 multinode-025157 kubelet[3092]: I0722 11:21:10.046730    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67-xtables-lock\") pod \"kindnet-ksk8n\" (UID: \"6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67\") " pod="kube-system/kindnet-ksk8n"
	Jul 22 11:21:14 multinode-025157 kubelet[3092]: I0722 11:21:14.848678    3092 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 22 11:22:06 multinode-025157 kubelet[3092]: E0722 11:22:06.036147    3092 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 11:22:06 multinode-025157 kubelet[3092]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 11:22:06 multinode-025157 kubelet[3092]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 11:22:06 multinode-025157 kubelet[3092]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 11:22:06 multinode-025157 kubelet[3092]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:22:48.312304   43644 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19313-5960/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-025157 -n multinode-025157
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-025157 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (323.10s)
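
The stderr line above, "failed to read file .../lastStart.txt: bufio.Scanner: token too long", is Go's bufio.Scanner hitting its default per-line cap of 64 KiB (bufio.MaxScanTokenSize); the very long single-line cluster-config dumps in lastStart.txt exceed it. A minimal sketch, not taken from the minikube sources, of how a larger scanner buffer avoids that error (the file name here is a stand-in for the real path shown in the log):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical local copy of the log
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default limit is bufio.MaxScanTokenSize (64 KiB); allow lines up to 1 MiB.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			_ = sc.Text() // process each log line
		}
		if err := sc.Err(); err != nil {
			// Without the enlarged buffer, "token too long" surfaces here.
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}
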

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 stop
E0722 11:23:29.088884   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-025157 stop: exit status 82 (2m0.448930346s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-025157-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-025157 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-025157 status: exit status 3 (18.855619669s)

                                                
                                                
-- stdout --
	multinode-025157
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-025157-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:25:11.796719   44300 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	E0722 11:25:11.796754   44300 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-025157 status" : exit status 3
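
Both failures above are detected purely through minikube's process exit codes: 82 for the GUEST_STOP_TIMEOUT on stop, then 3 from status once the m02 VM is unreachable over SSH. A rough sketch, assuming the same out/minikube-linux-amd64 binary and profile name and not reproducing the actual test helpers, of reading such an exit code from Go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Hypothetical check mirroring the harness step: run `minikube status`
		// for the profile and report the process exit code (3 means a host error here).
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-025157", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit status:", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("could not run minikube:", err)
		}
	}
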
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-025157 -n multinode-025157
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-025157 logs -n 25: (1.457593661s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-025157 ssh -n                                                                 | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-025157 cp multinode-025157-m02:/home/docker/cp-test.txt                       | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157:/home/docker/cp-test_multinode-025157-m02_multinode-025157.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n                                                                 | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n multinode-025157 sudo cat                                       | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | /home/docker/cp-test_multinode-025157-m02_multinode-025157.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-025157 cp multinode-025157-m02:/home/docker/cp-test.txt                       | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m03:/home/docker/cp-test_multinode-025157-m02_multinode-025157-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n                                                                 | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n multinode-025157-m03 sudo cat                                   | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | /home/docker/cp-test_multinode-025157-m02_multinode-025157-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-025157 cp testdata/cp-test.txt                                                | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n                                                                 | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-025157 cp multinode-025157-m03:/home/docker/cp-test.txt                       | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile430864957/001/cp-test_multinode-025157-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n                                                                 | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-025157 cp multinode-025157-m03:/home/docker/cp-test.txt                       | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157:/home/docker/cp-test_multinode-025157-m03_multinode-025157.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n                                                                 | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n multinode-025157 sudo cat                                       | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | /home/docker/cp-test_multinode-025157-m03_multinode-025157.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-025157 cp multinode-025157-m03:/home/docker/cp-test.txt                       | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m02:/home/docker/cp-test_multinode-025157-m03_multinode-025157-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n                                                                 | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | multinode-025157-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-025157 ssh -n multinode-025157-m02 sudo cat                                   | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	|         | /home/docker/cp-test_multinode-025157-m03_multinode-025157-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-025157 node stop m03                                                          | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:16 UTC |
	| node    | multinode-025157 node start                                                             | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:16 UTC | 22 Jul 24 11:17 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-025157                                                                | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:17 UTC |                     |
	| stop    | -p multinode-025157                                                                     | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:17 UTC |                     |
	| start   | -p multinode-025157                                                                     | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:19 UTC | 22 Jul 24 11:22 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-025157                                                                | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:22 UTC |                     |
	| node    | multinode-025157 node delete                                                            | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:22 UTC | 22 Jul 24 11:22 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-025157 stop                                                                   | multinode-025157 | jenkins | v1.33.1 | 22 Jul 24 11:22 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 11:19:28
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 11:19:28.940446   42537 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:19:28.940685   42537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:19:28.940694   42537 out.go:304] Setting ErrFile to fd 2...
	I0722 11:19:28.940698   42537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:19:28.940915   42537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:19:28.941477   42537 out.go:298] Setting JSON to false
	I0722 11:19:28.942336   42537 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3721,"bootTime":1721643448,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 11:19:28.942389   42537 start.go:139] virtualization: kvm guest
	I0722 11:19:28.944497   42537 out.go:177] * [multinode-025157] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 11:19:28.946055   42537 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 11:19:28.946062   42537 notify.go:220] Checking for updates...
	I0722 11:19:28.947401   42537 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 11:19:28.948672   42537 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:19:28.949955   42537 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:19:28.951238   42537 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 11:19:28.952427   42537 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 11:19:28.953971   42537 config.go:182] Loaded profile config "multinode-025157": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:19:28.954062   42537 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 11:19:28.954469   42537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:19:28.954532   42537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:19:28.970503   42537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36227
	I0722 11:19:28.970893   42537 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:19:28.971401   42537 main.go:141] libmachine: Using API Version  1
	I0722 11:19:28.971424   42537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:19:28.971734   42537 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:19:28.971900   42537 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:19:29.006534   42537 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 11:19:29.007598   42537 start.go:297] selected driver: kvm2
	I0722 11:19:29.007614   42537 start.go:901] validating driver "kvm2" against &{Name:multinode-025157 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-025157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.50 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:19:29.007752   42537 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 11:19:29.008077   42537 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:19:29.008147   42537 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 11:19:29.022299   42537 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 11:19:29.022947   42537 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:19:29.022973   42537 cni.go:84] Creating CNI manager for ""
	I0722 11:19:29.022980   42537 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0722 11:19:29.023075   42537 start.go:340] cluster config:
	{Name:multinode-025157 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-025157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.50 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:19:29.023217   42537 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:19:29.024811   42537 out.go:177] * Starting "multinode-025157" primary control-plane node in "multinode-025157" cluster
	I0722 11:19:29.025859   42537 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:19:29.025892   42537 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 11:19:29.025902   42537 cache.go:56] Caching tarball of preloaded images
	I0722 11:19:29.025974   42537 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 11:19:29.025985   42537 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 11:19:29.026095   42537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/config.json ...
	I0722 11:19:29.026269   42537 start.go:360] acquireMachinesLock for multinode-025157: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:19:29.026308   42537 start.go:364] duration metric: took 23.362µs to acquireMachinesLock for "multinode-025157"
	I0722 11:19:29.026324   42537 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:19:29.026331   42537 fix.go:54] fixHost starting: 
	I0722 11:19:29.026559   42537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:19:29.026589   42537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:19:29.039750   42537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33867
	I0722 11:19:29.040179   42537 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:19:29.040639   42537 main.go:141] libmachine: Using API Version  1
	I0722 11:19:29.040660   42537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:19:29.041007   42537 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:19:29.041167   42537 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:19:29.041306   42537 main.go:141] libmachine: (multinode-025157) Calling .GetState
	I0722 11:19:29.042992   42537 fix.go:112] recreateIfNeeded on multinode-025157: state=Running err=<nil>
	W0722 11:19:29.043014   42537 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:19:29.044770   42537 out.go:177] * Updating the running kvm2 "multinode-025157" VM ...
	I0722 11:19:29.045928   42537 machine.go:94] provisionDockerMachine start ...
	I0722 11:19:29.045942   42537 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:19:29.046105   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:19:29.048762   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.049224   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:19:29.049251   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.049380   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:19:29.049510   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.049629   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.049773   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:19:29.049941   42537 main.go:141] libmachine: Using SSH client type: native
	I0722 11:19:29.050149   42537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0722 11:19:29.050160   42537 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:19:29.165276   42537 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-025157
	
	I0722 11:19:29.165303   42537 main.go:141] libmachine: (multinode-025157) Calling .GetMachineName
	I0722 11:19:29.165541   42537 buildroot.go:166] provisioning hostname "multinode-025157"
	I0722 11:19:29.165563   42537 main.go:141] libmachine: (multinode-025157) Calling .GetMachineName
	I0722 11:19:29.165734   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:19:29.168107   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.168463   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:19:29.168489   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.168627   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:19:29.168807   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.168970   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.169097   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:19:29.169269   42537 main.go:141] libmachine: Using SSH client type: native
	I0722 11:19:29.169456   42537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0722 11:19:29.169473   42537 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-025157 && echo "multinode-025157" | sudo tee /etc/hostname
	I0722 11:19:29.298983   42537 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-025157
	
	I0722 11:19:29.299006   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:19:29.301852   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.302163   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:19:29.302191   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.302369   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:19:29.302534   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.302670   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.302773   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:19:29.302914   42537 main.go:141] libmachine: Using SSH client type: native
	I0722 11:19:29.303067   42537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0722 11:19:29.303082   42537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-025157' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-025157/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-025157' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:19:29.417111   42537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:19:29.417153   42537 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:19:29.417172   42537 buildroot.go:174] setting up certificates
	I0722 11:19:29.417183   42537 provision.go:84] configureAuth start
	I0722 11:19:29.417196   42537 main.go:141] libmachine: (multinode-025157) Calling .GetMachineName
	I0722 11:19:29.417440   42537 main.go:141] libmachine: (multinode-025157) Calling .GetIP
	I0722 11:19:29.420100   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.420472   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:19:29.420492   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.420624   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:19:29.422573   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.422851   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:19:29.422880   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.423008   42537 provision.go:143] copyHostCerts
	I0722 11:19:29.423040   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:19:29.423071   42537 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:19:29.423082   42537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:19:29.423150   42537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:19:29.423220   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:19:29.423239   42537 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:19:29.423246   42537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:19:29.423270   42537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:19:29.423308   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:19:29.423323   42537 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:19:29.423332   42537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:19:29.423353   42537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:19:29.423394   42537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.multinode-025157 san=[127.0.0.1 192.168.39.158 localhost minikube multinode-025157]
	I0722 11:19:29.573434   42537 provision.go:177] copyRemoteCerts
	I0722 11:19:29.573492   42537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:19:29.573516   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:19:29.576337   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.576724   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:19:29.576749   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.576952   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:19:29.577149   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.577290   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:19:29.577419   42537 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/multinode-025157/id_rsa Username:docker}
	I0722 11:19:29.664064   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 11:19:29.664123   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:19:29.688486   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 11:19:29.688553   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0722 11:19:29.713460   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 11:19:29.713524   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:19:29.737998   42537 provision.go:87] duration metric: took 320.802381ms to configureAuth
	I0722 11:19:29.738024   42537 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:19:29.738216   42537 config.go:182] Loaded profile config "multinode-025157": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:19:29.738278   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:19:29.741159   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.741547   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:19:29.741578   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:19:29.741730   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:19:29.741937   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.742098   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:19:29.742258   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:19:29.742415   42537 main.go:141] libmachine: Using SSH client type: native
	I0722 11:19:29.742573   42537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0722 11:19:29.742588   42537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:21:00.596780   42537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:21:00.596811   42537 machine.go:97] duration metric: took 1m31.550872531s to provisionDockerMachine
	I0722 11:21:00.596822   42537 start.go:293] postStartSetup for "multinode-025157" (driver="kvm2")
	I0722 11:21:00.596843   42537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:21:00.596858   42537 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:21:00.597214   42537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:21:00.597249   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:21:00.600268   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.600701   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:21:00.600727   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.600833   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:21:00.600997   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:21:00.601146   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:21:00.601300   42537 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/multinode-025157/id_rsa Username:docker}
	I0722 11:21:00.686695   42537 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:21:00.690774   42537 command_runner.go:130] > NAME=Buildroot
	I0722 11:21:00.690795   42537 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0722 11:21:00.690801   42537 command_runner.go:130] > ID=buildroot
	I0722 11:21:00.690808   42537 command_runner.go:130] > VERSION_ID=2023.02.9
	I0722 11:21:00.690815   42537 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0722 11:21:00.690942   42537 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:21:00.690968   42537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:21:00.691029   42537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:21:00.691127   42537 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:21:00.691147   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /etc/ssl/certs/130982.pem
	I0722 11:21:00.691250   42537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:21:00.700630   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
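The filesync step above maps everything under the profile's .minikube/files directory onto the same path inside the guest, which is why 130982.pem is copied to /etc/ssl/certs. A small sketch of that scan, assuming direct filesystem access to the local files directory (the root path is the one from this run):

	package main

	import (
		"fmt"
		"io/fs"
		"log"
		"path/filepath"
		"strings"
	)

	func main() {
		root := "/home/jenkins/minikube-integration/19313-5960/.minikube/files"
		err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			dest := strings.TrimPrefix(p, root) // e.g. /etc/ssl/certs/130982.pem
			fmt.Printf("local asset: %s -> %s\n", p, dest)
			return nil
		})
		if err != nil {
			log.Fatal(err)
		}
	}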
	I0722 11:21:00.725131   42537 start.go:296] duration metric: took 128.296952ms for postStartSetup
	I0722 11:21:00.725180   42537 fix.go:56] duration metric: took 1m31.698847619s for fixHost
	I0722 11:21:00.725206   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:21:00.727581   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.727884   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:21:00.727917   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.728059   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:21:00.728275   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:21:00.728433   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:21:00.728580   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:21:00.728748   42537 main.go:141] libmachine: Using SSH client type: native
	I0722 11:21:00.728899   42537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0722 11:21:00.728909   42537 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:21:00.840916   42537 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721647260.805135741
	
	I0722 11:21:00.840942   42537 fix.go:216] guest clock: 1721647260.805135741
	I0722 11:21:00.840953   42537 fix.go:229] Guest: 2024-07-22 11:21:00.805135741 +0000 UTC Remote: 2024-07-22 11:21:00.725187922 +0000 UTC m=+91.817074186 (delta=79.947819ms)
	I0722 11:21:00.841003   42537 fix.go:200] guest clock delta is within tolerance: 79.947819ms
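fix.go reads the guest clock with `date +%s.%N` (rendered above with the logger's %!s(MISSING)/%!N(MISSING) artifacts) and compares it against the host-side timestamp of the call; here the delta is ~80ms and passes the tolerance check. A small sketch of that comparison using the two timestamps from this run (the 2-second tolerance below is an assumption for illustration, not necessarily the value minikube uses):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// Guest output of `date +%s.%N` and the host timestamp, both taken from the log above.
		guestRaw := "1721647260.805135741"
		parts := strings.SplitN(guestRaw, ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64) // assumes a full 9-digit nanosecond field
		guest := time.Unix(sec, nsec)
		host := time.Date(2024, 7, 22, 11, 21, 0, 725187922, time.UTC)
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		// Prints 79.947819ms, matching the delta reported by fix.go above.
		fmt.Printf("guest clock delta: %v (within assumed 2s tolerance: %v)\n", delta, delta < 2*time.Second)
	}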
	I0722 11:21:00.841014   42537 start.go:83] releasing machines lock for "multinode-025157", held for 1m31.814696704s
	I0722 11:21:00.841042   42537 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:21:00.841284   42537 main.go:141] libmachine: (multinode-025157) Calling .GetIP
	I0722 11:21:00.843841   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.844226   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:21:00.844254   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.844410   42537 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:21:00.844914   42537 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:21:00.845079   42537 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:21:00.845143   42537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:21:00.845195   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:21:00.845322   42537 ssh_runner.go:195] Run: cat /version.json
	I0722 11:21:00.845346   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:21:00.847793   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.848073   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.848160   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:21:00.848185   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.848284   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:21:00.848424   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:21:00.848445   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:00.848456   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:21:00.848622   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:21:00.848629   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:21:00.848830   42537 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/multinode-025157/id_rsa Username:docker}
	I0722 11:21:00.848861   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:21:00.848974   42537 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:21:00.849113   42537 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/multinode-025157/id_rsa Username:docker}
	I0722 11:21:00.960240   42537 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0722 11:21:00.960906   42537 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0722 11:21:00.961098   42537 ssh_runner.go:195] Run: systemctl --version
	I0722 11:21:00.966849   42537 command_runner.go:130] > systemd 252 (252)
	I0722 11:21:00.966889   42537 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0722 11:21:00.966933   42537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:21:01.121958   42537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 11:21:01.128969   42537 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0722 11:21:01.129068   42537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:21:01.129122   42537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:21:01.138300   42537 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0722 11:21:01.138316   42537 start.go:495] detecting cgroup driver to use...
	I0722 11:21:01.138369   42537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:21:01.156166   42537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:21:01.169523   42537 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:21:01.169563   42537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:21:01.182893   42537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:21:01.196040   42537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:21:01.346539   42537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:21:01.489776   42537 docker.go:233] disabling docker service ...
	I0722 11:21:01.489855   42537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:21:01.509896   42537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:21:01.523880   42537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:21:01.669671   42537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:21:01.818848   42537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:21:01.833156   42537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:21:01.851042   42537 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0722 11:21:01.851335   42537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 11:21:01.851388   42537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:21:01.861459   42537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:21:01.861522   42537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:21:01.871283   42537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:21:01.881195   42537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:21:01.890990   42537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:21:01.901596   42537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:21:01.911997   42537 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:21:01.923462   42537 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:21:01.934514   42537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:21:01.944374   42537 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0722 11:21:01.944429   42537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:21:01.954074   42537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:21:02.088313   42537 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:21:03.517176   42537 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.428825339s)
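The sed commands above point CRI-O's drop-in config (/etc/crio/crio.conf.d/02-crio.conf) at the registry.k8s.io/pause:3.9 pause image, switch cgroup_manager to "cgroupfs" with conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls, before reloading systemd and restarting CRI-O. A rough Go equivalent of two of those edits, assuming a local copy of the file (minikube runs the sed commands over SSH instead, as shown):

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		// Hypothetical local copy of /etc/crio/crio.conf.d/02-crio.conf.
		path := "02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// pause_image lives under [crio.image]; cgroup_manager and conmon_cgroup under [crio.runtime].
		edits := []struct {
			re  *regexp.Regexp
			rep string
		}{
			{regexp.MustCompile(`(?m)^.*pause_image = .*$`), `pause_image = "registry.k8s.io/pause:3.9"`},
			{regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`), "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""},
		}
		for _, e := range edits {
			data = e.re.ReplaceAll(data, []byte(e.rep))
		}
		if err := os.WriteFile(path, data, 0o644); err != nil {
			log.Fatal(err)
		}
	}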
	I0722 11:21:03.517204   42537 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:21:03.517255   42537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:21:03.522308   42537 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0722 11:21:03.522327   42537 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0722 11:21:03.522336   42537 command_runner.go:130] > Device: 0,22	Inode: 1336        Links: 1
	I0722 11:21:03.522346   42537 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0722 11:21:03.522354   42537 command_runner.go:130] > Access: 2024-07-22 11:21:03.377422846 +0000
	I0722 11:21:03.522362   42537 command_runner.go:130] > Modify: 2024-07-22 11:21:03.377422846 +0000
	I0722 11:21:03.522373   42537 command_runner.go:130] > Change: 2024-07-22 11:21:03.377422846 +0000
	I0722 11:21:03.522382   42537 command_runner.go:130] >  Birth: -
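After the restart, start.go waits up to 60 seconds for the CRI-O socket to reappear, which is what the stat of /var/run/crio/crio.sock above confirms. A minimal sketch of that wait loop, assuming it runs on the guest itself rather than through ssh_runner:

	package main

	import (
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/crio/crio.sock"
		deadline := time.Now().Add(60 * time.Second)
		for {
			if fi, err := os.Stat(sock); err == nil && fi.Mode()&os.ModeSocket != 0 {
				fmt.Println(sock, "is back")
				return
			}
			if time.Now().After(deadline) {
				log.Fatalf("timed out waiting for %s", sock)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}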
	I0722 11:21:03.522404   42537 start.go:563] Will wait 60s for crictl version
	I0722 11:21:03.522444   42537 ssh_runner.go:195] Run: which crictl
	I0722 11:21:03.526170   42537 command_runner.go:130] > /usr/bin/crictl
	I0722 11:21:03.526217   42537 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:21:03.560726   42537 command_runner.go:130] > Version:  0.1.0
	I0722 11:21:03.560751   42537 command_runner.go:130] > RuntimeName:  cri-o
	I0722 11:21:03.560757   42537 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0722 11:21:03.560764   42537 command_runner.go:130] > RuntimeApiVersion:  v1
	I0722 11:21:03.560786   42537 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:21:03.560852   42537 ssh_runner.go:195] Run: crio --version
	I0722 11:21:03.589061   42537 command_runner.go:130] > crio version 1.29.1
	I0722 11:21:03.589078   42537 command_runner.go:130] > Version:        1.29.1
	I0722 11:21:03.589084   42537 command_runner.go:130] > GitCommit:      unknown
	I0722 11:21:03.589089   42537 command_runner.go:130] > GitCommitDate:  unknown
	I0722 11:21:03.589092   42537 command_runner.go:130] > GitTreeState:   clean
	I0722 11:21:03.589097   42537 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0722 11:21:03.589102   42537 command_runner.go:130] > GoVersion:      go1.21.6
	I0722 11:21:03.589106   42537 command_runner.go:130] > Compiler:       gc
	I0722 11:21:03.589110   42537 command_runner.go:130] > Platform:       linux/amd64
	I0722 11:21:03.589114   42537 command_runner.go:130] > Linkmode:       dynamic
	I0722 11:21:03.589119   42537 command_runner.go:130] > BuildTags:      
	I0722 11:21:03.589125   42537 command_runner.go:130] >   containers_image_ostree_stub
	I0722 11:21:03.589130   42537 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0722 11:21:03.589135   42537 command_runner.go:130] >   btrfs_noversion
	I0722 11:21:03.589142   42537 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0722 11:21:03.589158   42537 command_runner.go:130] >   libdm_no_deferred_remove
	I0722 11:21:03.589163   42537 command_runner.go:130] >   seccomp
	I0722 11:21:03.589172   42537 command_runner.go:130] > LDFlags:          unknown
	I0722 11:21:03.589177   42537 command_runner.go:130] > SeccompEnabled:   true
	I0722 11:21:03.589181   42537 command_runner.go:130] > AppArmorEnabled:  false
	I0722 11:21:03.589284   42537 ssh_runner.go:195] Run: crio --version
	I0722 11:21:03.622360   42537 command_runner.go:130] > crio version 1.29.1
	I0722 11:21:03.622383   42537 command_runner.go:130] > Version:        1.29.1
	I0722 11:21:03.622391   42537 command_runner.go:130] > GitCommit:      unknown
	I0722 11:21:03.622398   42537 command_runner.go:130] > GitCommitDate:  unknown
	I0722 11:21:03.622404   42537 command_runner.go:130] > GitTreeState:   clean
	I0722 11:21:03.622412   42537 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0722 11:21:03.622417   42537 command_runner.go:130] > GoVersion:      go1.21.6
	I0722 11:21:03.622421   42537 command_runner.go:130] > Compiler:       gc
	I0722 11:21:03.622425   42537 command_runner.go:130] > Platform:       linux/amd64
	I0722 11:21:03.622430   42537 command_runner.go:130] > Linkmode:       dynamic
	I0722 11:21:03.622440   42537 command_runner.go:130] > BuildTags:      
	I0722 11:21:03.622446   42537 command_runner.go:130] >   containers_image_ostree_stub
	I0722 11:21:03.622452   42537 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0722 11:21:03.622458   42537 command_runner.go:130] >   btrfs_noversion
	I0722 11:21:03.622466   42537 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0722 11:21:03.622476   42537 command_runner.go:130] >   libdm_no_deferred_remove
	I0722 11:21:03.622484   42537 command_runner.go:130] >   seccomp
	I0722 11:21:03.622494   42537 command_runner.go:130] > LDFlags:          unknown
	I0722 11:21:03.622500   42537 command_runner.go:130] > SeccompEnabled:   true
	I0722 11:21:03.622512   42537 command_runner.go:130] > AppArmorEnabled:  false
	I0722 11:21:03.625182   42537 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 11:21:03.626538   42537 main.go:141] libmachine: (multinode-025157) Calling .GetIP
	I0722 11:21:03.629111   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:03.629506   42537 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:21:03.629530   42537 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:21:03.629738   42537 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 11:21:03.634272   42537 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0722 11:21:03.634472   42537 kubeadm.go:883] updating cluster {Name:multinode-025157 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-025157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.50 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:21:03.634665   42537 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:21:03.634737   42537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:21:03.688677   42537 command_runner.go:130] > {
	I0722 11:21:03.688702   42537 command_runner.go:130] >   "images": [
	I0722 11:21:03.688708   42537 command_runner.go:130] >     {
	I0722 11:21:03.688719   42537 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0722 11:21:03.688728   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.688737   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0722 11:21:03.688743   42537 command_runner.go:130] >       ],
	I0722 11:21:03.688749   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.688761   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0722 11:21:03.688772   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0722 11:21:03.688777   42537 command_runner.go:130] >       ],
	I0722 11:21:03.688784   42537 command_runner.go:130] >       "size": "87165492",
	I0722 11:21:03.688792   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.688797   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.688805   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.688809   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.688813   42537 command_runner.go:130] >     },
	I0722 11:21:03.688816   42537 command_runner.go:130] >     {
	I0722 11:21:03.688822   42537 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0722 11:21:03.688826   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.688831   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0722 11:21:03.688834   42537 command_runner.go:130] >       ],
	I0722 11:21:03.688839   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.688846   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0722 11:21:03.688854   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0722 11:21:03.688857   42537 command_runner.go:130] >       ],
	I0722 11:21:03.688861   42537 command_runner.go:130] >       "size": "87174707",
	I0722 11:21:03.688864   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.688872   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.688879   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.688884   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.688888   42537 command_runner.go:130] >     },
	I0722 11:21:03.688891   42537 command_runner.go:130] >     {
	I0722 11:21:03.688897   42537 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0722 11:21:03.688901   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.688906   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0722 11:21:03.688909   42537 command_runner.go:130] >       ],
	I0722 11:21:03.688913   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.688920   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0722 11:21:03.688929   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0722 11:21:03.688933   42537 command_runner.go:130] >       ],
	I0722 11:21:03.688937   42537 command_runner.go:130] >       "size": "1363676",
	I0722 11:21:03.688941   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.688948   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.688952   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.688955   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.688959   42537 command_runner.go:130] >     },
	I0722 11:21:03.688962   42537 command_runner.go:130] >     {
	I0722 11:21:03.688968   42537 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0722 11:21:03.688972   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.688976   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0722 11:21:03.688980   42537 command_runner.go:130] >       ],
	I0722 11:21:03.688984   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.688991   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0722 11:21:03.689002   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0722 11:21:03.689006   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689009   42537 command_runner.go:130] >       "size": "31470524",
	I0722 11:21:03.689013   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.689017   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.689021   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.689025   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.689030   42537 command_runner.go:130] >     },
	I0722 11:21:03.689033   42537 command_runner.go:130] >     {
	I0722 11:21:03.689038   42537 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0722 11:21:03.689042   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.689047   42537 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0722 11:21:03.689053   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689057   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.689066   42537 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0722 11:21:03.689078   42537 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0722 11:21:03.689085   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689093   42537 command_runner.go:130] >       "size": "61245718",
	I0722 11:21:03.689100   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.689107   42537 command_runner.go:130] >       "username": "nonroot",
	I0722 11:21:03.689113   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.689119   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.689127   42537 command_runner.go:130] >     },
	I0722 11:21:03.689131   42537 command_runner.go:130] >     {
	I0722 11:21:03.689153   42537 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0722 11:21:03.689161   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.689168   42537 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0722 11:21:03.689176   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689182   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.689194   42537 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0722 11:21:03.689207   42537 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0722 11:21:03.689215   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689220   42537 command_runner.go:130] >       "size": "150779692",
	I0722 11:21:03.689225   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.689229   42537 command_runner.go:130] >         "value": "0"
	I0722 11:21:03.689235   42537 command_runner.go:130] >       },
	I0722 11:21:03.689239   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.689245   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.689248   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.689252   42537 command_runner.go:130] >     },
	I0722 11:21:03.689256   42537 command_runner.go:130] >     {
	I0722 11:21:03.689264   42537 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0722 11:21:03.689268   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.689273   42537 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0722 11:21:03.689277   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689280   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.689289   42537 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0722 11:21:03.689296   42537 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0722 11:21:03.689302   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689306   42537 command_runner.go:130] >       "size": "117609954",
	I0722 11:21:03.689312   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.689315   42537 command_runner.go:130] >         "value": "0"
	I0722 11:21:03.689323   42537 command_runner.go:130] >       },
	I0722 11:21:03.689327   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.689331   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.689335   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.689341   42537 command_runner.go:130] >     },
	I0722 11:21:03.689344   42537 command_runner.go:130] >     {
	I0722 11:21:03.689353   42537 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0722 11:21:03.689357   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.689365   42537 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0722 11:21:03.689369   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689374   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.689388   42537 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0722 11:21:03.689398   42537 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0722 11:21:03.689404   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689408   42537 command_runner.go:130] >       "size": "112198984",
	I0722 11:21:03.689414   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.689418   42537 command_runner.go:130] >         "value": "0"
	I0722 11:21:03.689423   42537 command_runner.go:130] >       },
	I0722 11:21:03.689427   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.689431   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.689434   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.689438   42537 command_runner.go:130] >     },
	I0722 11:21:03.689441   42537 command_runner.go:130] >     {
	I0722 11:21:03.689446   42537 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0722 11:21:03.689450   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.689455   42537 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0722 11:21:03.689458   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689462   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.689469   42537 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0722 11:21:03.689475   42537 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0722 11:21:03.689478   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689482   42537 command_runner.go:130] >       "size": "85953945",
	I0722 11:21:03.689486   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.689490   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.689495   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.689499   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.689502   42537 command_runner.go:130] >     },
	I0722 11:21:03.689505   42537 command_runner.go:130] >     {
	I0722 11:21:03.689511   42537 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0722 11:21:03.689515   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.689521   42537 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0722 11:21:03.689525   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689532   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.689539   42537 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0722 11:21:03.689548   42537 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0722 11:21:03.689553   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689557   42537 command_runner.go:130] >       "size": "63051080",
	I0722 11:21:03.689565   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.689570   42537 command_runner.go:130] >         "value": "0"
	I0722 11:21:03.689575   42537 command_runner.go:130] >       },
	I0722 11:21:03.689579   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.689584   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.689589   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.689600   42537 command_runner.go:130] >     },
	I0722 11:21:03.689605   42537 command_runner.go:130] >     {
	I0722 11:21:03.689612   42537 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0722 11:21:03.689621   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.689628   42537 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0722 11:21:03.689636   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689641   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.689655   42537 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0722 11:21:03.689666   42537 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0722 11:21:03.689673   42537 command_runner.go:130] >       ],
	I0722 11:21:03.689679   42537 command_runner.go:130] >       "size": "750414",
	I0722 11:21:03.689687   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.689694   42537 command_runner.go:130] >         "value": "65535"
	I0722 11:21:03.689701   42537 command_runner.go:130] >       },
	I0722 11:21:03.689705   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.689709   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.689713   42537 command_runner.go:130] >       "pinned": true
	I0722 11:21:03.689718   42537 command_runner.go:130] >     }
	I0722 11:21:03.689722   42537 command_runner.go:130] >   ]
	I0722 11:21:03.689725   42537 command_runner.go:130] > }
	I0722 11:21:03.689924   42537 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:21:03.689938   42537 crio.go:433] Images already preloaded, skipping extraction
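crio.go decides the preload can be skipped because every image it needs for Kubernetes v1.30.3 already appears in the `crictl images --output json` listing above. A sketch of decoding that JSON shape and checking for one required tag (the input file name is hypothetical; the field names come from the output above):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os"
	)

	// Field names mirror the `crictl images --output json` payload shown above.
	type image struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type listing struct {
		Images []image `json:"images"`
	}

	func main() {
		data, err := os.ReadFile("images.json") // hypothetical dump of the listing above
		if err != nil {
			log.Fatal(err)
		}
		var l listing
		if err := json.Unmarshal(data, &l); err != nil {
			log.Fatal(err)
		}
		want := "registry.k8s.io/kube-apiserver:v1.30.3"
		for _, img := range l.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					fmt.Println("found", want, "id:", img.ID)
					return
				}
			}
		}
		log.Fatalf("missing %s", want)
	}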
	I0722 11:21:03.689981   42537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:21:03.724684   42537 command_runner.go:130] > {
	I0722 11:21:03.724712   42537 command_runner.go:130] >   "images": [
	I0722 11:21:03.724719   42537 command_runner.go:130] >     {
	I0722 11:21:03.724733   42537 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0722 11:21:03.724741   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.724750   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0722 11:21:03.724757   42537 command_runner.go:130] >       ],
	I0722 11:21:03.724763   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.724774   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0722 11:21:03.724785   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0722 11:21:03.724793   42537 command_runner.go:130] >       ],
	I0722 11:21:03.724801   42537 command_runner.go:130] >       "size": "87165492",
	I0722 11:21:03.724812   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.724819   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.724831   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.724842   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.724849   42537 command_runner.go:130] >     },
	I0722 11:21:03.724856   42537 command_runner.go:130] >     {
	I0722 11:21:03.724870   42537 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0722 11:21:03.724878   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.724891   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0722 11:21:03.724901   42537 command_runner.go:130] >       ],
	I0722 11:21:03.724910   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.724920   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0722 11:21:03.724929   42537 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0722 11:21:03.724936   42537 command_runner.go:130] >       ],
	I0722 11:21:03.724941   42537 command_runner.go:130] >       "size": "87174707",
	I0722 11:21:03.724947   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.724956   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.724963   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.724967   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.724971   42537 command_runner.go:130] >     },
	I0722 11:21:03.724977   42537 command_runner.go:130] >     {
	I0722 11:21:03.724985   42537 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0722 11:21:03.724992   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.724997   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0722 11:21:03.725018   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725026   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725033   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0722 11:21:03.725043   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0722 11:21:03.725049   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725054   42537 command_runner.go:130] >       "size": "1363676",
	I0722 11:21:03.725058   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.725062   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.725067   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725073   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.725077   42537 command_runner.go:130] >     },
	I0722 11:21:03.725083   42537 command_runner.go:130] >     {
	I0722 11:21:03.725089   42537 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0722 11:21:03.725100   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.725108   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0722 11:21:03.725115   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725119   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725126   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0722 11:21:03.725146   42537 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0722 11:21:03.725152   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725157   42537 command_runner.go:130] >       "size": "31470524",
	I0722 11:21:03.725163   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.725167   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.725174   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725179   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.725185   42537 command_runner.go:130] >     },
	I0722 11:21:03.725189   42537 command_runner.go:130] >     {
	I0722 11:21:03.725196   42537 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0722 11:21:03.725203   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.725208   42537 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0722 11:21:03.725215   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725219   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725229   42537 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0722 11:21:03.725239   42537 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0722 11:21:03.725245   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725250   42537 command_runner.go:130] >       "size": "61245718",
	I0722 11:21:03.725257   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.725262   42537 command_runner.go:130] >       "username": "nonroot",
	I0722 11:21:03.725269   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725273   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.725279   42537 command_runner.go:130] >     },
	I0722 11:21:03.725283   42537 command_runner.go:130] >     {
	I0722 11:21:03.725292   42537 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0722 11:21:03.725298   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.725302   42537 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0722 11:21:03.725308   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725313   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725326   42537 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0722 11:21:03.725337   42537 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0722 11:21:03.725344   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725348   42537 command_runner.go:130] >       "size": "150779692",
	I0722 11:21:03.725355   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.725359   42537 command_runner.go:130] >         "value": "0"
	I0722 11:21:03.725366   42537 command_runner.go:130] >       },
	I0722 11:21:03.725371   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.725377   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725382   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.725388   42537 command_runner.go:130] >     },
	I0722 11:21:03.725392   42537 command_runner.go:130] >     {
	I0722 11:21:03.725400   42537 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0722 11:21:03.725407   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.725412   42537 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0722 11:21:03.725418   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725423   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725433   42537 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0722 11:21:03.725442   42537 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0722 11:21:03.725448   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725453   42537 command_runner.go:130] >       "size": "117609954",
	I0722 11:21:03.725459   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.725463   42537 command_runner.go:130] >         "value": "0"
	I0722 11:21:03.725467   42537 command_runner.go:130] >       },
	I0722 11:21:03.725473   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.725484   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725491   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.725495   42537 command_runner.go:130] >     },
	I0722 11:21:03.725501   42537 command_runner.go:130] >     {
	I0722 11:21:03.725507   42537 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0722 11:21:03.725513   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.725519   42537 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0722 11:21:03.725525   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725529   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725551   42537 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0722 11:21:03.725562   42537 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0722 11:21:03.725570   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725574   42537 command_runner.go:130] >       "size": "112198984",
	I0722 11:21:03.725581   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.725585   42537 command_runner.go:130] >         "value": "0"
	I0722 11:21:03.725591   42537 command_runner.go:130] >       },
	I0722 11:21:03.725596   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.725602   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725606   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.725612   42537 command_runner.go:130] >     },
	I0722 11:21:03.725618   42537 command_runner.go:130] >     {
	I0722 11:21:03.725632   42537 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0722 11:21:03.725643   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.725651   42537 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0722 11:21:03.725660   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725667   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725682   42537 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0722 11:21:03.725697   42537 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0722 11:21:03.725707   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725714   42537 command_runner.go:130] >       "size": "85953945",
	I0722 11:21:03.725725   42537 command_runner.go:130] >       "uid": null,
	I0722 11:21:03.725732   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.725742   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725750   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.725756   42537 command_runner.go:130] >     },
	I0722 11:21:03.725760   42537 command_runner.go:130] >     {
	I0722 11:21:03.725779   42537 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0722 11:21:03.725787   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.725792   42537 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0722 11:21:03.725798   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725803   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725812   42537 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0722 11:21:03.725820   42537 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0722 11:21:03.725826   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725830   42537 command_runner.go:130] >       "size": "63051080",
	I0722 11:21:03.725836   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.725841   42537 command_runner.go:130] >         "value": "0"
	I0722 11:21:03.725848   42537 command_runner.go:130] >       },
	I0722 11:21:03.725859   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.725866   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725871   42537 command_runner.go:130] >       "pinned": false
	I0722 11:21:03.725877   42537 command_runner.go:130] >     },
	I0722 11:21:03.725881   42537 command_runner.go:130] >     {
	I0722 11:21:03.725889   42537 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0722 11:21:03.725895   42537 command_runner.go:130] >       "repoTags": [
	I0722 11:21:03.725900   42537 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0722 11:21:03.725907   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725911   42537 command_runner.go:130] >       "repoDigests": [
	I0722 11:21:03.725920   42537 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0722 11:21:03.725929   42537 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0722 11:21:03.725935   42537 command_runner.go:130] >       ],
	I0722 11:21:03.725939   42537 command_runner.go:130] >       "size": "750414",
	I0722 11:21:03.725946   42537 command_runner.go:130] >       "uid": {
	I0722 11:21:03.725950   42537 command_runner.go:130] >         "value": "65535"
	I0722 11:21:03.725956   42537 command_runner.go:130] >       },
	I0722 11:21:03.725960   42537 command_runner.go:130] >       "username": "",
	I0722 11:21:03.725967   42537 command_runner.go:130] >       "spec": null,
	I0722 11:21:03.725971   42537 command_runner.go:130] >       "pinned": true
	I0722 11:21:03.725977   42537 command_runner.go:130] >     }
	I0722 11:21:03.725980   42537 command_runner.go:130] >   ]
	I0722 11:21:03.725986   42537 command_runner.go:130] > }
	I0722 11:21:03.726119   42537 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:21:03.726131   42537 cache_images.go:84] Images are preloaded, skipping loading
	I0722 11:21:03.726137   42537 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.30.3 crio true true} ...
	I0722 11:21:03.726247   42537 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-025157 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-025157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
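kubeadm.go renders the kubelet systemd drop-in shown above, clearing ExecStart and restarting the v1.30.3 kubelet binary with this node's hostname override and IP. A sketch that reproduces the same drop-in from a template (the template literal and field names here are assumptions for illustration, not minikube's actual template; the values are the ones from this run):

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// Hypothetical template; minikube generates this drop-in from its own template internally.
	const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		err := t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.30.3",
			"NodeName":          "multinode-025157",
			"NodeIP":            "192.168.39.158",
		})
		if err != nil {
			log.Fatal(err)
		}
	}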
	I0722 11:21:03.726313   42537 ssh_runner.go:195] Run: crio config
	I0722 11:21:03.759944   42537 command_runner.go:130] ! time="2024-07-22 11:21:03.724083465Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0722 11:21:03.766245   42537 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0722 11:21:03.778388   42537 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0722 11:21:03.778406   42537 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0722 11:21:03.778412   42537 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0722 11:21:03.778415   42537 command_runner.go:130] > #
	I0722 11:21:03.778422   42537 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0722 11:21:03.778428   42537 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0722 11:21:03.778433   42537 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0722 11:21:03.778441   42537 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0722 11:21:03.778446   42537 command_runner.go:130] > # reload'.
	I0722 11:21:03.778455   42537 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0722 11:21:03.778465   42537 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0722 11:21:03.778480   42537 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0722 11:21:03.778489   42537 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0722 11:21:03.778495   42537 command_runner.go:130] > [crio]
	I0722 11:21:03.778504   42537 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0722 11:21:03.778511   42537 command_runner.go:130] > # containers images, in this directory.
	I0722 11:21:03.778522   42537 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0722 11:21:03.778533   42537 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0722 11:21:03.778544   42537 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0722 11:21:03.778553   42537 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0722 11:21:03.778559   42537 command_runner.go:130] > # imagestore = ""
	I0722 11:21:03.778568   42537 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0722 11:21:03.778574   42537 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0722 11:21:03.778580   42537 command_runner.go:130] > storage_driver = "overlay"
	I0722 11:21:03.778587   42537 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0722 11:21:03.778593   42537 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0722 11:21:03.778597   42537 command_runner.go:130] > storage_option = [
	I0722 11:21:03.778602   42537 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0722 11:21:03.778608   42537 command_runner.go:130] > ]
	I0722 11:21:03.778614   42537 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0722 11:21:03.778621   42537 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0722 11:21:03.778631   42537 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0722 11:21:03.778639   42537 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0722 11:21:03.778650   42537 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0722 11:21:03.778657   42537 command_runner.go:130] > # always happen on a node reboot
	I0722 11:21:03.778665   42537 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0722 11:21:03.778678   42537 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0722 11:21:03.778690   42537 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0722 11:21:03.778697   42537 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0722 11:21:03.778708   42537 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0722 11:21:03.778720   42537 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0722 11:21:03.778737   42537 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0722 11:21:03.778743   42537 command_runner.go:130] > # internal_wipe = true
	I0722 11:21:03.778752   42537 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0722 11:21:03.778759   42537 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0722 11:21:03.778763   42537 command_runner.go:130] > # internal_repair = false
	I0722 11:21:03.778770   42537 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0722 11:21:03.778776   42537 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0722 11:21:03.778783   42537 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0722 11:21:03.778788   42537 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0722 11:21:03.778796   42537 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0722 11:21:03.778799   42537 command_runner.go:130] > [crio.api]
	I0722 11:21:03.778810   42537 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0722 11:21:03.778817   42537 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0722 11:21:03.778822   42537 command_runner.go:130] > # IP address on which the stream server will listen.
	I0722 11:21:03.778828   42537 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0722 11:21:03.778834   42537 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0722 11:21:03.778842   42537 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0722 11:21:03.778846   42537 command_runner.go:130] > # stream_port = "0"
	I0722 11:21:03.778852   42537 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0722 11:21:03.778861   42537 command_runner.go:130] > # stream_enable_tls = false
	I0722 11:21:03.778867   42537 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0722 11:21:03.778873   42537 command_runner.go:130] > # stream_idle_timeout = ""
	I0722 11:21:03.778880   42537 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0722 11:21:03.778887   42537 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0722 11:21:03.778891   42537 command_runner.go:130] > # minutes.
	I0722 11:21:03.778895   42537 command_runner.go:130] > # stream_tls_cert = ""
	I0722 11:21:03.778902   42537 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0722 11:21:03.778907   42537 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0722 11:21:03.778913   42537 command_runner.go:130] > # stream_tls_key = ""
	I0722 11:21:03.778919   42537 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0722 11:21:03.778927   42537 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0722 11:21:03.778941   42537 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0722 11:21:03.778947   42537 command_runner.go:130] > # stream_tls_ca = ""
	I0722 11:21:03.778954   42537 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0722 11:21:03.778960   42537 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0722 11:21:03.778967   42537 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0722 11:21:03.778974   42537 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0722 11:21:03.778980   42537 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0722 11:21:03.778987   42537 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0722 11:21:03.778991   42537 command_runner.go:130] > [crio.runtime]
	I0722 11:21:03.778997   42537 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0722 11:21:03.779004   42537 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0722 11:21:03.779008   42537 command_runner.go:130] > # "nofile=1024:2048"
	I0722 11:21:03.779016   42537 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0722 11:21:03.779022   42537 command_runner.go:130] > # default_ulimits = [
	I0722 11:21:03.779026   42537 command_runner.go:130] > # ]
	I0722 11:21:03.779034   42537 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0722 11:21:03.779040   42537 command_runner.go:130] > # no_pivot = false
	I0722 11:21:03.779045   42537 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0722 11:21:03.779053   42537 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0722 11:21:03.779060   42537 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0722 11:21:03.779066   42537 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0722 11:21:03.779073   42537 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0722 11:21:03.779079   42537 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0722 11:21:03.779086   42537 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0722 11:21:03.779090   42537 command_runner.go:130] > # Cgroup setting for conmon
	I0722 11:21:03.779098   42537 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0722 11:21:03.779104   42537 command_runner.go:130] > conmon_cgroup = "pod"
	I0722 11:21:03.779118   42537 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0722 11:21:03.779125   42537 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0722 11:21:03.779131   42537 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0722 11:21:03.779137   42537 command_runner.go:130] > conmon_env = [
	I0722 11:21:03.779142   42537 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0722 11:21:03.779147   42537 command_runner.go:130] > ]
	I0722 11:21:03.779152   42537 command_runner.go:130] > # Additional environment variables to set for all the
	I0722 11:21:03.779157   42537 command_runner.go:130] > # containers. These are overridden if set in the
	I0722 11:21:03.779162   42537 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0722 11:21:03.779168   42537 command_runner.go:130] > # default_env = [
	I0722 11:21:03.779171   42537 command_runner.go:130] > # ]
	I0722 11:21:03.779178   42537 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0722 11:21:03.779185   42537 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0722 11:21:03.779191   42537 command_runner.go:130] > # selinux = false
	I0722 11:21:03.779197   42537 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0722 11:21:03.779205   42537 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0722 11:21:03.779211   42537 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0722 11:21:03.779217   42537 command_runner.go:130] > # seccomp_profile = ""
	I0722 11:21:03.779222   42537 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0722 11:21:03.779230   42537 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0722 11:21:03.779237   42537 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0722 11:21:03.779242   42537 command_runner.go:130] > # which might increase security.
	I0722 11:21:03.779248   42537 command_runner.go:130] > # This option is currently deprecated,
	I0722 11:21:03.779253   42537 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0722 11:21:03.779260   42537 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0722 11:21:03.779266   42537 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0722 11:21:03.779273   42537 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0722 11:21:03.779282   42537 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0722 11:21:03.779288   42537 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0722 11:21:03.779295   42537 command_runner.go:130] > # This option supports live configuration reload.
	I0722 11:21:03.779300   42537 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0722 11:21:03.779307   42537 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0722 11:21:03.779311   42537 command_runner.go:130] > # the cgroup blockio controller.
	I0722 11:21:03.779317   42537 command_runner.go:130] > # blockio_config_file = ""
	I0722 11:21:03.779323   42537 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0722 11:21:03.779329   42537 command_runner.go:130] > # blockio parameters.
	I0722 11:21:03.779333   42537 command_runner.go:130] > # blockio_reload = false
	I0722 11:21:03.779342   42537 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0722 11:21:03.779348   42537 command_runner.go:130] > # irqbalance daemon.
	I0722 11:21:03.779353   42537 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0722 11:21:03.779361   42537 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0722 11:21:03.779367   42537 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0722 11:21:03.779375   42537 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0722 11:21:03.779383   42537 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0722 11:21:03.779390   42537 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0722 11:21:03.779397   42537 command_runner.go:130] > # This option supports live configuration reload.
	I0722 11:21:03.779401   42537 command_runner.go:130] > # rdt_config_file = ""
	I0722 11:21:03.779406   42537 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0722 11:21:03.779412   42537 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0722 11:21:03.779426   42537 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0722 11:21:03.779432   42537 command_runner.go:130] > # separate_pull_cgroup = ""
	I0722 11:21:03.779438   42537 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0722 11:21:03.779446   42537 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0722 11:21:03.779452   42537 command_runner.go:130] > # will be added.
	I0722 11:21:03.779456   42537 command_runner.go:130] > # default_capabilities = [
	I0722 11:21:03.779461   42537 command_runner.go:130] > # 	"CHOWN",
	I0722 11:21:03.779465   42537 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0722 11:21:03.779470   42537 command_runner.go:130] > # 	"FSETID",
	I0722 11:21:03.779474   42537 command_runner.go:130] > # 	"FOWNER",
	I0722 11:21:03.779479   42537 command_runner.go:130] > # 	"SETGID",
	I0722 11:21:03.779483   42537 command_runner.go:130] > # 	"SETUID",
	I0722 11:21:03.779487   42537 command_runner.go:130] > # 	"SETPCAP",
	I0722 11:21:03.779493   42537 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0722 11:21:03.779496   42537 command_runner.go:130] > # 	"KILL",
	I0722 11:21:03.779500   42537 command_runner.go:130] > # ]
	I0722 11:21:03.779507   42537 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0722 11:21:03.779515   42537 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0722 11:21:03.779523   42537 command_runner.go:130] > # add_inheritable_capabilities = false
	I0722 11:21:03.779529   42537 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0722 11:21:03.779536   42537 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0722 11:21:03.779542   42537 command_runner.go:130] > default_sysctls = [
	I0722 11:21:03.779547   42537 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0722 11:21:03.779552   42537 command_runner.go:130] > ]
	I0722 11:21:03.779557   42537 command_runner.go:130] > # List of devices on the host that a
	I0722 11:21:03.779564   42537 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0722 11:21:03.779568   42537 command_runner.go:130] > # allowed_devices = [
	I0722 11:21:03.779574   42537 command_runner.go:130] > # 	"/dev/fuse",
	I0722 11:21:03.779577   42537 command_runner.go:130] > # ]
	I0722 11:21:03.779586   42537 command_runner.go:130] > # List of additional devices, specified as
	I0722 11:21:03.779595   42537 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0722 11:21:03.779602   42537 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0722 11:21:03.779608   42537 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0722 11:21:03.779614   42537 command_runner.go:130] > # additional_devices = [
	I0722 11:21:03.779618   42537 command_runner.go:130] > # ]
	I0722 11:21:03.779628   42537 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0722 11:21:03.779637   42537 command_runner.go:130] > # cdi_spec_dirs = [
	I0722 11:21:03.779642   42537 command_runner.go:130] > # 	"/etc/cdi",
	I0722 11:21:03.779649   42537 command_runner.go:130] > # 	"/var/run/cdi",
	I0722 11:21:03.779654   42537 command_runner.go:130] > # ]
	I0722 11:21:03.779663   42537 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0722 11:21:03.779675   42537 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0722 11:21:03.779684   42537 command_runner.go:130] > # Defaults to false.
	I0722 11:21:03.779692   42537 command_runner.go:130] > # device_ownership_from_security_context = false
	I0722 11:21:03.779704   42537 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0722 11:21:03.779716   42537 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0722 11:21:03.779726   42537 command_runner.go:130] > # hooks_dir = [
	I0722 11:21:03.779736   42537 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0722 11:21:03.779741   42537 command_runner.go:130] > # ]
	I0722 11:21:03.779749   42537 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0722 11:21:03.779755   42537 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0722 11:21:03.779763   42537 command_runner.go:130] > # its default mounts from the following two files:
	I0722 11:21:03.779766   42537 command_runner.go:130] > #
	I0722 11:21:03.779776   42537 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0722 11:21:03.779785   42537 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0722 11:21:03.779792   42537 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0722 11:21:03.779797   42537 command_runner.go:130] > #
	I0722 11:21:03.779803   42537 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0722 11:21:03.779811   42537 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0722 11:21:03.779819   42537 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0722 11:21:03.779823   42537 command_runner.go:130] > #      only add mounts it finds in this file.
	I0722 11:21:03.779828   42537 command_runner.go:130] > #
	I0722 11:21:03.779832   42537 command_runner.go:130] > # default_mounts_file = ""
	I0722 11:21:03.779839   42537 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0722 11:21:03.779845   42537 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0722 11:21:03.779851   42537 command_runner.go:130] > pids_limit = 1024
	I0722 11:21:03.779858   42537 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0722 11:21:03.779868   42537 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0722 11:21:03.779877   42537 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0722 11:21:03.779886   42537 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0722 11:21:03.779893   42537 command_runner.go:130] > # log_size_max = -1
	I0722 11:21:03.779899   42537 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0722 11:21:03.779906   42537 command_runner.go:130] > # log_to_journald = false
	I0722 11:21:03.779912   42537 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0722 11:21:03.779918   42537 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0722 11:21:03.779923   42537 command_runner.go:130] > # Path to directory for container attach sockets.
	I0722 11:21:03.779930   42537 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0722 11:21:03.779935   42537 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0722 11:21:03.779941   42537 command_runner.go:130] > # bind_mount_prefix = ""
	I0722 11:21:03.779946   42537 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0722 11:21:03.779952   42537 command_runner.go:130] > # read_only = false
	I0722 11:21:03.779958   42537 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0722 11:21:03.779966   42537 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0722 11:21:03.779971   42537 command_runner.go:130] > # live configuration reload.
	I0722 11:21:03.779976   42537 command_runner.go:130] > # log_level = "info"
	I0722 11:21:03.779980   42537 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0722 11:21:03.779987   42537 command_runner.go:130] > # This option supports live configuration reload.
	I0722 11:21:03.779991   42537 command_runner.go:130] > # log_filter = ""
	I0722 11:21:03.779998   42537 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0722 11:21:03.780008   42537 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0722 11:21:03.780014   42537 command_runner.go:130] > # separated by comma.
	I0722 11:21:03.780021   42537 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0722 11:21:03.780026   42537 command_runner.go:130] > # uid_mappings = ""
	I0722 11:21:03.780032   42537 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0722 11:21:03.780039   42537 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0722 11:21:03.780044   42537 command_runner.go:130] > # separated by comma.
	I0722 11:21:03.780051   42537 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0722 11:21:03.780057   42537 command_runner.go:130] > # gid_mappings = ""
	I0722 11:21:03.780063   42537 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0722 11:21:03.780070   42537 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0722 11:21:03.780076   42537 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0722 11:21:03.780085   42537 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0722 11:21:03.780091   42537 command_runner.go:130] > # minimum_mappable_uid = -1
	I0722 11:21:03.780097   42537 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0722 11:21:03.780105   42537 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0722 11:21:03.780116   42537 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0722 11:21:03.780124   42537 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0722 11:21:03.780130   42537 command_runner.go:130] > # minimum_mappable_gid = -1
	I0722 11:21:03.780136   42537 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0722 11:21:03.780144   42537 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0722 11:21:03.780151   42537 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0722 11:21:03.780155   42537 command_runner.go:130] > # ctr_stop_timeout = 30
	I0722 11:21:03.780161   42537 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0722 11:21:03.780168   42537 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0722 11:21:03.780173   42537 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0722 11:21:03.780180   42537 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0722 11:21:03.780183   42537 command_runner.go:130] > drop_infra_ctr = false
	I0722 11:21:03.780191   42537 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0722 11:21:03.780198   42537 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0722 11:21:03.780205   42537 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0722 11:21:03.780211   42537 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0722 11:21:03.780218   42537 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0722 11:21:03.780225   42537 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0722 11:21:03.780233   42537 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0722 11:21:03.780237   42537 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0722 11:21:03.780243   42537 command_runner.go:130] > # shared_cpuset = ""
	I0722 11:21:03.780250   42537 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0722 11:21:03.780256   42537 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0722 11:21:03.780261   42537 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0722 11:21:03.780270   42537 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0722 11:21:03.780276   42537 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0722 11:21:03.780281   42537 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0722 11:21:03.780289   42537 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0722 11:21:03.780295   42537 command_runner.go:130] > # enable_criu_support = false
	I0722 11:21:03.780299   42537 command_runner.go:130] > # Enable/disable the generation of the container,
	I0722 11:21:03.780307   42537 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0722 11:21:03.780311   42537 command_runner.go:130] > # enable_pod_events = false
	I0722 11:21:03.780317   42537 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0722 11:21:03.780325   42537 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0722 11:21:03.780332   42537 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0722 11:21:03.780336   42537 command_runner.go:130] > # default_runtime = "runc"
	I0722 11:21:03.780342   42537 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0722 11:21:03.780349   42537 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0722 11:21:03.780359   42537 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0722 11:21:03.780366   42537 command_runner.go:130] > # creation as a file is not desired either.
	I0722 11:21:03.780374   42537 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0722 11:21:03.780392   42537 command_runner.go:130] > # the hostname is being managed dynamically.
	I0722 11:21:03.780400   42537 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0722 11:21:03.780408   42537 command_runner.go:130] > # ]
	I0722 11:21:03.780413   42537 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0722 11:21:03.780421   42537 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0722 11:21:03.780428   42537 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0722 11:21:03.780435   42537 command_runner.go:130] > # Each entry in the table should follow the format:
	I0722 11:21:03.780438   42537 command_runner.go:130] > #
	I0722 11:21:03.780445   42537 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0722 11:21:03.780450   42537 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0722 11:21:03.780475   42537 command_runner.go:130] > # runtime_type = "oci"
	I0722 11:21:03.780482   42537 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0722 11:21:03.780487   42537 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0722 11:21:03.780493   42537 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0722 11:21:03.780498   42537 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0722 11:21:03.780504   42537 command_runner.go:130] > # monitor_env = []
	I0722 11:21:03.780509   42537 command_runner.go:130] > # privileged_without_host_devices = false
	I0722 11:21:03.780515   42537 command_runner.go:130] > # allowed_annotations = []
	I0722 11:21:03.780520   42537 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0722 11:21:03.780525   42537 command_runner.go:130] > # Where:
	I0722 11:21:03.780531   42537 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0722 11:21:03.780538   42537 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0722 11:21:03.780547   42537 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0722 11:21:03.780553   42537 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0722 11:21:03.780559   42537 command_runner.go:130] > #   in $PATH.
	I0722 11:21:03.780565   42537 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0722 11:21:03.780571   42537 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0722 11:21:03.780577   42537 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0722 11:21:03.780582   42537 command_runner.go:130] > #   state.
	I0722 11:21:03.780588   42537 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0722 11:21:03.780596   42537 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0722 11:21:03.780604   42537 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0722 11:21:03.780611   42537 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0722 11:21:03.780618   42537 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0722 11:21:03.780631   42537 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0722 11:21:03.780641   42537 command_runner.go:130] > #   The currently recognized values are:
	I0722 11:21:03.780651   42537 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0722 11:21:03.780664   42537 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0722 11:21:03.780675   42537 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0722 11:21:03.780685   42537 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0722 11:21:03.780699   42537 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0722 11:21:03.780709   42537 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0722 11:21:03.780717   42537 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0722 11:21:03.780725   42537 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0722 11:21:03.780732   42537 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0722 11:21:03.780740   42537 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0722 11:21:03.780745   42537 command_runner.go:130] > #   deprecated option "conmon".
	I0722 11:21:03.780754   42537 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0722 11:21:03.780760   42537 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0722 11:21:03.780766   42537 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0722 11:21:03.780774   42537 command_runner.go:130] > #   should be moved to the container's cgroup
	I0722 11:21:03.780781   42537 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0722 11:21:03.780787   42537 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0722 11:21:03.780793   42537 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0722 11:21:03.780800   42537 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0722 11:21:03.780803   42537 command_runner.go:130] > #
	I0722 11:21:03.780810   42537 command_runner.go:130] > # Using the seccomp notifier feature:
	I0722 11:21:03.780813   42537 command_runner.go:130] > #
	I0722 11:21:03.780819   42537 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0722 11:21:03.780827   42537 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0722 11:21:03.780832   42537 command_runner.go:130] > #
	I0722 11:21:03.780839   42537 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0722 11:21:03.780848   42537 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0722 11:21:03.780853   42537 command_runner.go:130] > #
	I0722 11:21:03.780860   42537 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0722 11:21:03.780866   42537 command_runner.go:130] > # feature.
	I0722 11:21:03.780869   42537 command_runner.go:130] > #
	I0722 11:21:03.780875   42537 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0722 11:21:03.780881   42537 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0722 11:21:03.780889   42537 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0722 11:21:03.780897   42537 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0722 11:21:03.780903   42537 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0722 11:21:03.780908   42537 command_runner.go:130] > #
	I0722 11:21:03.780913   42537 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0722 11:21:03.780921   42537 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0722 11:21:03.780926   42537 command_runner.go:130] > #
	I0722 11:21:03.780932   42537 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0722 11:21:03.780939   42537 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0722 11:21:03.780945   42537 command_runner.go:130] > #
	I0722 11:21:03.780950   42537 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0722 11:21:03.780958   42537 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0722 11:21:03.780963   42537 command_runner.go:130] > # limitation.
	I0722 11:21:03.780968   42537 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0722 11:21:03.780975   42537 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0722 11:21:03.780979   42537 command_runner.go:130] > runtime_type = "oci"
	I0722 11:21:03.780985   42537 command_runner.go:130] > runtime_root = "/run/runc"
	I0722 11:21:03.780989   42537 command_runner.go:130] > runtime_config_path = ""
	I0722 11:21:03.780996   42537 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0722 11:21:03.781000   42537 command_runner.go:130] > monitor_cgroup = "pod"
	I0722 11:21:03.781006   42537 command_runner.go:130] > monitor_exec_cgroup = ""
	I0722 11:21:03.781010   42537 command_runner.go:130] > monitor_env = [
	I0722 11:21:03.781017   42537 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0722 11:21:03.781022   42537 command_runner.go:130] > ]
	I0722 11:21:03.781027   42537 command_runner.go:130] > privileged_without_host_devices = false
	I0722 11:21:03.781036   42537 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0722 11:21:03.781044   42537 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0722 11:21:03.781050   42537 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0722 11:21:03.781059   42537 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0722 11:21:03.781066   42537 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0722 11:21:03.781073   42537 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0722 11:21:03.781081   42537 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0722 11:21:03.781090   42537 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0722 11:21:03.781095   42537 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0722 11:21:03.781102   42537 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0722 11:21:03.781105   42537 command_runner.go:130] > # Example:
	I0722 11:21:03.781112   42537 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0722 11:21:03.781116   42537 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0722 11:21:03.781121   42537 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0722 11:21:03.781125   42537 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0722 11:21:03.781129   42537 command_runner.go:130] > # cpuset = 0
	I0722 11:21:03.781132   42537 command_runner.go:130] > # cpushares = "0-1"
	I0722 11:21:03.781135   42537 command_runner.go:130] > # Where:
	I0722 11:21:03.781139   42537 command_runner.go:130] > # The workload name is workload-type.
	I0722 11:21:03.781145   42537 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0722 11:21:03.781150   42537 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0722 11:21:03.781155   42537 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0722 11:21:03.781163   42537 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0722 11:21:03.781168   42537 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0722 11:21:03.781172   42537 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0722 11:21:03.781178   42537 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0722 11:21:03.781182   42537 command_runner.go:130] > # Default value is set to true
	I0722 11:21:03.781186   42537 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0722 11:21:03.781191   42537 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0722 11:21:03.781196   42537 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0722 11:21:03.781200   42537 command_runner.go:130] > # Default value is set to 'false'
	I0722 11:21:03.781204   42537 command_runner.go:130] > # disable_hostport_mapping = false
	I0722 11:21:03.781210   42537 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0722 11:21:03.781213   42537 command_runner.go:130] > #
	I0722 11:21:03.781218   42537 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0722 11:21:03.781223   42537 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0722 11:21:03.781229   42537 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0722 11:21:03.781234   42537 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0722 11:21:03.781239   42537 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0722 11:21:03.781243   42537 command_runner.go:130] > [crio.image]
	I0722 11:21:03.781248   42537 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0722 11:21:03.781254   42537 command_runner.go:130] > # default_transport = "docker://"
	I0722 11:21:03.781260   42537 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0722 11:21:03.781268   42537 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0722 11:21:03.781272   42537 command_runner.go:130] > # global_auth_file = ""
	I0722 11:21:03.781277   42537 command_runner.go:130] > # The image used to instantiate infra containers.
	I0722 11:21:03.781283   42537 command_runner.go:130] > # This option supports live configuration reload.
	I0722 11:21:03.781288   42537 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0722 11:21:03.781297   42537 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0722 11:21:03.781302   42537 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0722 11:21:03.781309   42537 command_runner.go:130] > # This option supports live configuration reload.
	I0722 11:21:03.781313   42537 command_runner.go:130] > # pause_image_auth_file = ""
	I0722 11:21:03.781320   42537 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0722 11:21:03.781329   42537 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0722 11:21:03.781337   42537 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0722 11:21:03.781344   42537 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0722 11:21:03.781350   42537 command_runner.go:130] > # pause_command = "/pause"
	I0722 11:21:03.781356   42537 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0722 11:21:03.781364   42537 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0722 11:21:03.781370   42537 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0722 11:21:03.781378   42537 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0722 11:21:03.781386   42537 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0722 11:21:03.781394   42537 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0722 11:21:03.781400   42537 command_runner.go:130] > # pinned_images = [
	I0722 11:21:03.781403   42537 command_runner.go:130] > # ]
	I0722 11:21:03.781412   42537 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0722 11:21:03.781420   42537 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0722 11:21:03.781426   42537 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0722 11:21:03.781434   42537 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0722 11:21:03.781441   42537 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0722 11:21:03.781445   42537 command_runner.go:130] > # signature_policy = ""
	I0722 11:21:03.781450   42537 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0722 11:21:03.781459   42537 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0722 11:21:03.781465   42537 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0722 11:21:03.781473   42537 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0722 11:21:03.781480   42537 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0722 11:21:03.781485   42537 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0722 11:21:03.781492   42537 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0722 11:21:03.781500   42537 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0722 11:21:03.781505   42537 command_runner.go:130] > # changing them here.
	I0722 11:21:03.781509   42537 command_runner.go:130] > # insecure_registries = [
	I0722 11:21:03.781514   42537 command_runner.go:130] > # ]
	I0722 11:21:03.781521   42537 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0722 11:21:03.781527   42537 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0722 11:21:03.781536   42537 command_runner.go:130] > # image_volumes = "mkdir"
	I0722 11:21:03.781544   42537 command_runner.go:130] > # Temporary directory to use for storing big files
	I0722 11:21:03.781548   42537 command_runner.go:130] > # big_files_temporary_dir = ""
	I0722 11:21:03.781556   42537 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0722 11:21:03.781562   42537 command_runner.go:130] > # CNI plugins.
	I0722 11:21:03.781566   42537 command_runner.go:130] > [crio.network]
	I0722 11:21:03.781573   42537 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0722 11:21:03.781580   42537 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0722 11:21:03.781584   42537 command_runner.go:130] > # cni_default_network = ""
	I0722 11:21:03.781592   42537 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0722 11:21:03.781598   42537 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0722 11:21:03.781604   42537 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0722 11:21:03.781609   42537 command_runner.go:130] > # plugin_dirs = [
	I0722 11:21:03.781612   42537 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0722 11:21:03.781617   42537 command_runner.go:130] > # ]
	I0722 11:21:03.781625   42537 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0722 11:21:03.781634   42537 command_runner.go:130] > [crio.metrics]
	I0722 11:21:03.781642   42537 command_runner.go:130] > # Globally enable or disable metrics support.
	I0722 11:21:03.781652   42537 command_runner.go:130] > enable_metrics = true
	I0722 11:21:03.781658   42537 command_runner.go:130] > # Specify enabled metrics collectors.
	I0722 11:21:03.781668   42537 command_runner.go:130] > # Per default all metrics are enabled.
	I0722 11:21:03.781682   42537 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0722 11:21:03.781694   42537 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0722 11:21:03.781705   42537 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0722 11:21:03.781715   42537 command_runner.go:130] > # metrics_collectors = [
	I0722 11:21:03.781720   42537 command_runner.go:130] > # 	"operations",
	I0722 11:21:03.781729   42537 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0722 11:21:03.781739   42537 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0722 11:21:03.781749   42537 command_runner.go:130] > # 	"operations_errors",
	I0722 11:21:03.781758   42537 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0722 11:21:03.781766   42537 command_runner.go:130] > # 	"image_pulls_by_name",
	I0722 11:21:03.781770   42537 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0722 11:21:03.781776   42537 command_runner.go:130] > # 	"image_pulls_failures",
	I0722 11:21:03.781780   42537 command_runner.go:130] > # 	"image_pulls_successes",
	I0722 11:21:03.781786   42537 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0722 11:21:03.781790   42537 command_runner.go:130] > # 	"image_layer_reuse",
	I0722 11:21:03.781797   42537 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0722 11:21:03.781800   42537 command_runner.go:130] > # 	"containers_oom_total",
	I0722 11:21:03.781804   42537 command_runner.go:130] > # 	"containers_oom",
	I0722 11:21:03.781810   42537 command_runner.go:130] > # 	"processes_defunct",
	I0722 11:21:03.781814   42537 command_runner.go:130] > # 	"operations_total",
	I0722 11:21:03.781821   42537 command_runner.go:130] > # 	"operations_latency_seconds",
	I0722 11:21:03.781825   42537 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0722 11:21:03.781832   42537 command_runner.go:130] > # 	"operations_errors_total",
	I0722 11:21:03.781836   42537 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0722 11:21:03.781842   42537 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0722 11:21:03.781846   42537 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0722 11:21:03.781852   42537 command_runner.go:130] > # 	"image_pulls_success_total",
	I0722 11:21:03.781857   42537 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0722 11:21:03.781864   42537 command_runner.go:130] > # 	"containers_oom_count_total",
	I0722 11:21:03.781869   42537 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0722 11:21:03.781875   42537 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0722 11:21:03.781879   42537 command_runner.go:130] > # ]
	I0722 11:21:03.781885   42537 command_runner.go:130] > # The port on which the metrics server will listen.
	I0722 11:21:03.781891   42537 command_runner.go:130] > # metrics_port = 9090
	I0722 11:21:03.781896   42537 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0722 11:21:03.781903   42537 command_runner.go:130] > # metrics_socket = ""
	I0722 11:21:03.781908   42537 command_runner.go:130] > # The certificate for the secure metrics server.
	I0722 11:21:03.781917   42537 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0722 11:21:03.781925   42537 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0722 11:21:03.781932   42537 command_runner.go:130] > # certificate on any modification event.
	I0722 11:21:03.781936   42537 command_runner.go:130] > # metrics_cert = ""
	I0722 11:21:03.781941   42537 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0722 11:21:03.781948   42537 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0722 11:21:03.781952   42537 command_runner.go:130] > # metrics_key = ""
	I0722 11:21:03.781959   42537 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0722 11:21:03.781962   42537 command_runner.go:130] > [crio.tracing]
	I0722 11:21:03.781969   42537 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0722 11:21:03.781973   42537 command_runner.go:130] > # enable_tracing = false
	I0722 11:21:03.781980   42537 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0722 11:21:03.781985   42537 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0722 11:21:03.781993   42537 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0722 11:21:03.781999   42537 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0722 11:21:03.782004   42537 command_runner.go:130] > # CRI-O NRI configuration.
	I0722 11:21:03.782010   42537 command_runner.go:130] > [crio.nri]
	I0722 11:21:03.782014   42537 command_runner.go:130] > # Globally enable or disable NRI.
	I0722 11:21:03.782020   42537 command_runner.go:130] > # enable_nri = false
	I0722 11:21:03.782024   42537 command_runner.go:130] > # NRI socket to listen on.
	I0722 11:21:03.782030   42537 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0722 11:21:03.782035   42537 command_runner.go:130] > # NRI plugin directory to use.
	I0722 11:21:03.782041   42537 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0722 11:21:03.782045   42537 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0722 11:21:03.782050   42537 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0722 11:21:03.782057   42537 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0722 11:21:03.782063   42537 command_runner.go:130] > # nri_disable_connections = false
	I0722 11:21:03.782068   42537 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0722 11:21:03.782074   42537 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0722 11:21:03.782079   42537 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0722 11:21:03.782085   42537 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0722 11:21:03.782092   42537 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0722 11:21:03.782097   42537 command_runner.go:130] > [crio.stats]
	I0722 11:21:03.782103   42537 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0722 11:21:03.782113   42537 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0722 11:21:03.782119   42537 command_runner.go:130] > # stats_collection_period = 0
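	For reference, a minimal sketch (not part of the test run) of scraping the metrics endpoint that the [crio.metrics] section above enables. It assumes enable_metrics = true and the default metrics_port = 9090 are in effect and that the endpoint is reachable at 127.0.0.1 from where the snippet runs; exact exported metric names follow the collector names listed above, possibly prefixed with "crio_" or "container_runtime_crio_".
	
	// Sketch only: fetch CRI-O's Prometheus metrics and print the operation counters.
	// Assumption: the metrics server listens on 127.0.0.1:9090 (default metrics_port).
	package main
	
	import (
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)
	
	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Println("metrics endpoint not reachable:", err)
			return
		}
		defer resp.Body.Close()
	
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			fmt.Println("read error:", err)
			return
		}
		// Keep only non-comment lines for the "operations" collectors listed in the config above.
		for _, line := range strings.Split(string(body), "\n") {
			if !strings.HasPrefix(line, "#") && strings.Contains(line, "operations") {
				fmt.Println(line)
			}
		}
	}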
	I0722 11:21:03.782216   42537 cni.go:84] Creating CNI manager for ""
	I0722 11:21:03.782225   42537 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0722 11:21:03.782235   42537 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:21:03.782252   42537 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-025157 NodeName:multinode-025157 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:21:03.782373   42537 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-025157"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
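	The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that the next step copies to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of walking that stream and printing each document's kind, assuming gopkg.in/yaml.v3 is available; the local filename "kubeadm.yaml" is hypothetical.
	
	// Sketch only: decode a multi-document kubeadm config and list apiVersion/kind per document.
	package main
	
	import (
		"fmt"
		"io"
		"log"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
	
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatalf("invalid document: %v", err)
			}
			// Expect InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
			fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
		}
	}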
	
	I0722 11:21:03.782425   42537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 11:21:03.792343   42537 command_runner.go:130] > kubeadm
	I0722 11:21:03.792361   42537 command_runner.go:130] > kubectl
	I0722 11:21:03.792367   42537 command_runner.go:130] > kubelet
	I0722 11:21:03.792397   42537 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:21:03.792448   42537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:21:03.801616   42537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0722 11:21:03.817663   42537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:21:03.833426   42537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0722 11:21:03.849475   42537 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I0722 11:21:03.853170   42537 command_runner.go:130] > 192.168.39.158	control-plane.minikube.internal
	I0722 11:21:03.853289   42537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:21:03.987370   42537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:21:04.001972   42537 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157 for IP: 192.168.39.158
	I0722 11:21:04.001989   42537 certs.go:194] generating shared ca certs ...
	I0722 11:21:04.002003   42537 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:21:04.002173   42537 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:21:04.002219   42537 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:21:04.002229   42537 certs.go:256] generating profile certs ...
	I0722 11:21:04.002297   42537 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/client.key
	I0722 11:21:04.002352   42537 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/apiserver.key.268a156f
	I0722 11:21:04.002387   42537 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/proxy-client.key
	I0722 11:21:04.002397   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 11:21:04.002410   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 11:21:04.002420   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 11:21:04.002434   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 11:21:04.002451   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 11:21:04.002464   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 11:21:04.002476   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 11:21:04.002487   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 11:21:04.002535   42537 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:21:04.002563   42537 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:21:04.002573   42537 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:21:04.002592   42537 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:21:04.002617   42537 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:21:04.002636   42537 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:21:04.002674   42537 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:21:04.002697   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> /usr/share/ca-certificates/130982.pem
	I0722 11:21:04.002710   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:21:04.002721   42537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem -> /usr/share/ca-certificates/13098.pem
	I0722 11:21:04.003238   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:21:04.027286   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:21:04.050580   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:21:04.074001   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:21:04.097524   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 11:21:04.121581   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 11:21:04.145303   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:21:04.168560   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/multinode-025157/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:21:04.192945   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:21:04.217786   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:21:04.242586   42537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:21:04.267099   42537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:21:04.285052   42537 ssh_runner.go:195] Run: openssl version
	I0722 11:21:04.290892   42537 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0722 11:21:04.290950   42537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:21:04.301513   42537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:21:04.305743   42537 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:21:04.305870   42537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:21:04.305911   42537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:21:04.311308   42537 command_runner.go:130] > 3ec20f2e
	I0722 11:21:04.311509   42537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:21:04.320716   42537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:21:04.330987   42537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:21:04.335478   42537 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:21:04.335497   42537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:21:04.335535   42537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:21:04.341576   42537 command_runner.go:130] > b5213941
	I0722 11:21:04.341711   42537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:21:04.351031   42537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:21:04.361549   42537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:21:04.365746   42537 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:21:04.365908   42537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:21:04.365936   42537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:21:04.371624   42537 command_runner.go:130] > 51391683
	I0722 11:21:04.371686   42537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
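	The three sequences above repeat the same pattern: hash a CA certificate with `openssl x509 -hash -noout` and link it into /etc/ssl/certs/<hash>.0 so the system trust store can resolve it. A sketch of that pattern in Go, using the minikubeCA.pem path and the b5213941 hash seen in this run; illustrative only (minikube performs these steps over ssh_runner, and writing to /etc/ssl/certs requires root).
	
	// Sketch only: compute the OpenSSL subject hash of a CA cert and create /etc/ssl/certs/<hash>.0.
	package main
	
	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	func main() {
		cert := "/etc/ssl/certs/minikubeCA.pem"
	
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatalf("openssl failed: %v", err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941 in the run above
	
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if _, err := os.Lstat(link); err == nil {
			fmt.Println("symlink already present:", link)
			return
		}
		if err := os.Symlink(cert, link); err != nil { // needs root, as with the sudo ln -fs above
			log.Fatalf("symlink failed: %v", err)
		}
		fmt.Println("linked", cert, "->", link)
	}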
	I0722 11:21:04.380575   42537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:21:04.385341   42537 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:21:04.385360   42537 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0722 11:21:04.385368   42537 command_runner.go:130] > Device: 253,1	Inode: 3150891     Links: 1
	I0722 11:21:04.385377   42537 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0722 11:21:04.385386   42537 command_runner.go:130] > Access: 2024-07-22 11:14:24.190992226 +0000
	I0722 11:21:04.385408   42537 command_runner.go:130] > Modify: 2024-07-22 11:14:24.190992226 +0000
	I0722 11:21:04.385420   42537 command_runner.go:130] > Change: 2024-07-22 11:14:24.190992226 +0000
	I0722 11:21:04.385429   42537 command_runner.go:130] >  Birth: 2024-07-22 11:14:24.190992226 +0000
	I0722 11:21:04.385483   42537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:21:04.390960   42537 command_runner.go:130] > Certificate will not expire
	I0722 11:21:04.391013   42537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:21:04.396552   42537 command_runner.go:130] > Certificate will not expire
	I0722 11:21:04.396614   42537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:21:04.402020   42537 command_runner.go:130] > Certificate will not expire
	I0722 11:21:04.402230   42537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:21:04.407480   42537 command_runner.go:130] > Certificate will not expire
	I0722 11:21:04.407716   42537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:21:04.413035   42537 command_runner.go:130] > Certificate will not expire
	I0722 11:21:04.413234   42537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 11:21:04.418442   42537 command_runner.go:130] > Certificate will not expire
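	Each check above runs `openssl x509 -noout -checkend 86400`, i.e. "does this certificate expire within the next 24 hours?". The same check can be done with the Go standard library alone; a sketch against one of the cert paths from this run:
	
	// Sketch only: stdlib equivalent of `openssl x509 -checkend 86400` for one certificate.
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	func main() {
		path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatalf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("Certificate will not expire")
	}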
	I0722 11:21:04.418627   42537 kubeadm.go:392] StartCluster: {Name:multinode-025157 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-025157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.155 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.50 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:21:04.418763   42537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:21:04.418813   42537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:21:04.454578   42537 command_runner.go:130] > c6cee19e34e4b971b582c31e860c234e7332fc662b0a03889f0abe267b33dc83
	I0722 11:21:04.454607   42537 command_runner.go:130] > c8ee5f6d8a84cf39ed93d9f97f2aea6af59792efc7ca3d22ab5977f29913d035
	I0722 11:21:04.454616   42537 command_runner.go:130] > 1fe3af5c01ec940f21176249de664c57dc58b34ffb6fceac4d35c55646e50b93
	I0722 11:21:04.454625   42537 command_runner.go:130] > 1c87ae44611335d3e254a269ca9843e61acca277b61c3cfd5a7ee389eab0a0fe
	I0722 11:21:04.454634   42537 command_runner.go:130] > 702ffe223ffbdbd26bcfbedcee8d508cd00c9fcdee81c7aa12ffab5e3cde854c
	I0722 11:21:04.454642   42537 command_runner.go:130] > 41200509492aeb47fecfd89d606a251df7b021c3320f35b984019521bdc3b59f
	I0722 11:21:04.454648   42537 command_runner.go:130] > 9fcf31453e06da42b109eb535b6676cbf599610361eb902a8727895311e88d1e
	I0722 11:21:04.454655   42537 command_runner.go:130] > 3a756aa97fb8a81312a36b2d014bf2619ac5d5b2c1ce5367fbec568e435b04f4
	I0722 11:21:04.454678   42537 cri.go:89] found id: "c6cee19e34e4b971b582c31e860c234e7332fc662b0a03889f0abe267b33dc83"
	I0722 11:21:04.454689   42537 cri.go:89] found id: "c8ee5f6d8a84cf39ed93d9f97f2aea6af59792efc7ca3d22ab5977f29913d035"
	I0722 11:21:04.454697   42537 cri.go:89] found id: "1fe3af5c01ec940f21176249de664c57dc58b34ffb6fceac4d35c55646e50b93"
	I0722 11:21:04.454705   42537 cri.go:89] found id: "1c87ae44611335d3e254a269ca9843e61acca277b61c3cfd5a7ee389eab0a0fe"
	I0722 11:21:04.454712   42537 cri.go:89] found id: "702ffe223ffbdbd26bcfbedcee8d508cd00c9fcdee81c7aa12ffab5e3cde854c"
	I0722 11:21:04.454716   42537 cri.go:89] found id: "41200509492aeb47fecfd89d606a251df7b021c3320f35b984019521bdc3b59f"
	I0722 11:21:04.454723   42537 cri.go:89] found id: "9fcf31453e06da42b109eb535b6676cbf599610361eb902a8727895311e88d1e"
	I0722 11:21:04.454727   42537 cri.go:89] found id: "3a756aa97fb8a81312a36b2d014bf2619ac5d5b2c1ce5367fbec568e435b04f4"
	I0722 11:21:04.454732   42537 cri.go:89] found id: ""
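	Before restarting the cluster, cri.go enumerates the existing kube-system containers with the crictl invocation shown above. A small sketch of the same query, reusing the exact label selector from the log; it assumes crictl is on PATH and that sudo is available, as in the test environment.
	
	// Sketch only: list all container IDs whose pod lives in the kube-system namespace.
	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatalf("crictl failed: %v", err)
		}
		ids := strings.Fields(strings.TrimSpace(string(out)))
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
		if len(ids) == 0 {
			fmt.Println("no kube-system containers found")
		}
	}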
	I0722 11:21:04.454779   42537 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.390108436Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721647512390085206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a967356-b5e5-4489-935a-209ad8245a50 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.390810952Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=433aff73-0d6d-4e4a-ad3b-2930bc1baa96 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.390861466Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=433aff73-0d6d-4e4a-ad3b-2930bc1baa96 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.391483805Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c2a3493f61411824614c441e1d5f3533836e0c8afa72d53c7dd61281abbf00,PodSandboxId:71cbd29654a04cfa33acae2104625c2a7d7af11e2599abd56206c36754a3cbd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721647304228060094,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-65kqg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 103ec644-0628-4056-a814-044f38ece31f,},Annotations:map[string]string{io.kubernetes.container.hash: 318eb941,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c5daaeebe619d6f32b9f54bb15d461d2c263a24fc0e8e5162dc012448c052b,PodSandboxId:b00a720f793e1fc2cc6387f423af3e752daeb8645e98dcb7f4cdff3f14001902,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721647270731639193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksk8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67,},Annotations:map[string]string{io.kubernetes.container.hash: c62c8be1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3322a06af35cfbdbb916f0b20ac7b184a84cefba47094bfa5facfab0ec06735,PodSandboxId:4ce09d606f177ec101acc161c9fba1be8ea11505955da8064a551198e462c3c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721647270582575723,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-knmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5934987b-a9ec-4a7d-a446-b8a8c686ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 2cfecdef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f89de775577c02a3a6e9f6b3b8bda5f46ac61acae0d58173ad32707b6d8b90,PodSandboxId:adea1c11ada9b19c93f9c330f32bcd8ce0d505ab82405c44e26f9cdfab3e8a24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721647270529261108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 629c8fdb-9801-4ad0-857f-22817bc60e17,},An
notations:map[string]string{io.kubernetes.container.hash: bb0ae72c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20b67e53b9c38207a87f3b84f23b9922150eb375e18e1209254475066746763,PodSandboxId:fd958ec8bd7960b18ca1d3908fa781444ce3fe61c5c1364c1424048851ee1dca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721647270487268395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xv25n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f84e764d-47ca-4634-be5b-aec35a978516,},Annotations:map[string]string{io.ku
bernetes.container.hash: bd4d1ad3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f120afa0b416866b5719621806acc8acb202ebba2b96a4f775ec38c8f35b3bed,PodSandboxId:092da20f3dc36d253ad1c2982763f0ece671b42ca41f94a45ded61a97c5174e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721647266712601703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffde10143c677eeb363eba418ccd6135,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0b58aa965a534dae9905e00697ae0043257bc3cba127faf4cc2c9785c20ced7,PodSandboxId:beaf0d5fae8ca40535372ca52fed6f51ea7775a243ddc8229cfd377fe687d5aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721647266675536663,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baffe058d2eab14bba7bb69be802823,},Annotations:map[string]string{io.kubernetes.container.hash: 1c7963d5,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f533b9177b28fa7396cc8802a8a2c574a481880d21c9fdaddedf8efe1bda20f,PodSandboxId:149431d9bab728eef6ef26857918eb67aaa7d5d04d02348ca9cdb9ba948ad9c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721647266642994402,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaad1d5975ac0330d0eca26b6a335dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 908449f3,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1025ae107db528537896672e610b73744c367193dcdad9c8c334f88701990658,PodSandboxId:c4746e933e22198dd680a18b31b463bce38bf26805bcf1d6fe29d35e8ee39dca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721647266653238473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2934112736843e2be33b5a75c928eeba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a0458e53e93890aa4a446e6e1dccdec0c646a741abf8893201983961a9db2f,PodSandboxId:41e04960852e730f8728e4c37f9c1e1fc8ff99b855b775238a6c58738f757ba9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721646951761206290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-65kqg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 103ec644-0628-4056-a814-044f38ece31f,},Annotations:map[string]string{io.kubernetes.container.hash: 318eb941,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6cee19e34e4b971b582c31e860c234e7332fc662b0a03889f0abe267b33dc83,PodSandboxId:23dc5d18c3dc0807477d9547a0daa6f739e871e4c265802929313adae3e0de78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721646902871135551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-knmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5934987b-a9ec-4a7d-a446-b8a8c686ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 2cfecdef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ee5f6d8a84cf39ed93d9f97f2aea6af59792efc7ca3d22ab5977f29913d035,PodSandboxId:c0012afb7ace6dfc0e125e1bc98d3a407f3dec5da1c50d5370b3f6b14656f03b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721646902813854844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 629c8fdb-9801-4ad0-857f-22817bc60e17,},Annotations:map[string]string{io.kubernetes.container.hash: bb0ae72c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe3af5c01ec940f21176249de664c57dc58b34ffb6fceac4d35c55646e50b93,PodSandboxId:4b46d6216eff4c9b44543091fe96c14139cd81c0770b433a22551fe344a361df,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721646890785439732,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksk8n,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67,},Annotations:map[string]string{io.kubernetes.container.hash: c62c8be1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c87ae44611335d3e254a269ca9843e61acca277b61c3cfd5a7ee389eab0a0fe,PodSandboxId:d6ec778d8382dd3a5da66c96229b849b0083e2583bc5368486debe25f57f7f1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721646889110203722,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xv25n,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f84e764d-47ca-4634-be5b-aec35a978516,},Annotations:map[string]string{io.kubernetes.container.hash: bd4d1ad3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702ffe223ffbdbd26bcfbedcee8d508cd00c9fcdee81c7aa12ffab5e3cde854c,PodSandboxId:93b897045f2a6ca82c86b5cc3bbd2fb8400d28c3c9834c8f83140b1a1b6b1ed4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721646868481747906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ffde10143c677eeb363eba418ccd6135,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41200509492aeb47fecfd89d606a251df7b021c3320f35b984019521bdc3b59f,PodSandboxId:f0b860b9d01a7c38c4623070a44f0570613545a7418d6fbf19fee4d2f5c88092,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721646868469126497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baffe058d2eab14bba7bb69be802823,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1c7963d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fcf31453e06da42b109eb535b6676cbf599610361eb902a8727895311e88d1e,PodSandboxId:e04d4710c2a668577953d984f03f2056b61183ad103dbea70d9eeb104c69d9c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721646868460717903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2934112736843e2be33b5a75c928eeba,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a756aa97fb8a81312a36b2d014bf2619ac5d5b2c1ce5367fbec568e435b04f4,PodSandboxId:fb4864e3abe4138a4230ab20a3945772786a45b82a26234a5f40c68616c368cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721646868427941671,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaad1d5975ac0330d0eca26b6a335dc9,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 908449f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=433aff73-0d6d-4e4a-ad3b-2930bc1baa96 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.432149051Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dab86be9-d9b9-4df3-b346-4b9a7b7b3723 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.432265718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dab86be9-d9b9-4df3-b346-4b9a7b7b3723 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.433908925Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab48af78-546a-42a2-b3cc-d85012466db9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.434401376Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721647512434376131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab48af78-546a-42a2-b3cc-d85012466db9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.435143960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09da937a-3a2a-481e-9dee-8353147afb51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.435237631Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09da937a-3a2a-481e-9dee-8353147afb51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.435613132Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c2a3493f61411824614c441e1d5f3533836e0c8afa72d53c7dd61281abbf00,PodSandboxId:71cbd29654a04cfa33acae2104625c2a7d7af11e2599abd56206c36754a3cbd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721647304228060094,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-65kqg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 103ec644-0628-4056-a814-044f38ece31f,},Annotations:map[string]string{io.kubernetes.container.hash: 318eb941,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c5daaeebe619d6f32b9f54bb15d461d2c263a24fc0e8e5162dc012448c052b,PodSandboxId:b00a720f793e1fc2cc6387f423af3e752daeb8645e98dcb7f4cdff3f14001902,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721647270731639193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksk8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67,},Annotations:map[string]string{io.kubernetes.container.hash: c62c8be1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3322a06af35cfbdbb916f0b20ac7b184a84cefba47094bfa5facfab0ec06735,PodSandboxId:4ce09d606f177ec101acc161c9fba1be8ea11505955da8064a551198e462c3c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721647270582575723,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-knmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5934987b-a9ec-4a7d-a446-b8a8c686ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 2cfecdef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f89de775577c02a3a6e9f6b3b8bda5f46ac61acae0d58173ad32707b6d8b90,PodSandboxId:adea1c11ada9b19c93f9c330f32bcd8ce0d505ab82405c44e26f9cdfab3e8a24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721647270529261108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 629c8fdb-9801-4ad0-857f-22817bc60e17,},An
notations:map[string]string{io.kubernetes.container.hash: bb0ae72c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20b67e53b9c38207a87f3b84f23b9922150eb375e18e1209254475066746763,PodSandboxId:fd958ec8bd7960b18ca1d3908fa781444ce3fe61c5c1364c1424048851ee1dca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721647270487268395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xv25n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f84e764d-47ca-4634-be5b-aec35a978516,},Annotations:map[string]string{io.ku
bernetes.container.hash: bd4d1ad3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f120afa0b416866b5719621806acc8acb202ebba2b96a4f775ec38c8f35b3bed,PodSandboxId:092da20f3dc36d253ad1c2982763f0ece671b42ca41f94a45ded61a97c5174e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721647266712601703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffde10143c677eeb363eba418ccd6135,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0b58aa965a534dae9905e00697ae0043257bc3cba127faf4cc2c9785c20ced7,PodSandboxId:beaf0d5fae8ca40535372ca52fed6f51ea7775a243ddc8229cfd377fe687d5aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721647266675536663,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baffe058d2eab14bba7bb69be802823,},Annotations:map[string]string{io.kubernetes.container.hash: 1c7963d5,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f533b9177b28fa7396cc8802a8a2c574a481880d21c9fdaddedf8efe1bda20f,PodSandboxId:149431d9bab728eef6ef26857918eb67aaa7d5d04d02348ca9cdb9ba948ad9c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721647266642994402,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaad1d5975ac0330d0eca26b6a335dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 908449f3,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1025ae107db528537896672e610b73744c367193dcdad9c8c334f88701990658,PodSandboxId:c4746e933e22198dd680a18b31b463bce38bf26805bcf1d6fe29d35e8ee39dca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721647266653238473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2934112736843e2be33b5a75c928eeba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a0458e53e93890aa4a446e6e1dccdec0c646a741abf8893201983961a9db2f,PodSandboxId:41e04960852e730f8728e4c37f9c1e1fc8ff99b855b775238a6c58738f757ba9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721646951761206290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-65kqg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 103ec644-0628-4056-a814-044f38ece31f,},Annotations:map[string]string{io.kubernetes.container.hash: 318eb941,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6cee19e34e4b971b582c31e860c234e7332fc662b0a03889f0abe267b33dc83,PodSandboxId:23dc5d18c3dc0807477d9547a0daa6f739e871e4c265802929313adae3e0de78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721646902871135551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-knmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5934987b-a9ec-4a7d-a446-b8a8c686ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 2cfecdef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ee5f6d8a84cf39ed93d9f97f2aea6af59792efc7ca3d22ab5977f29913d035,PodSandboxId:c0012afb7ace6dfc0e125e1bc98d3a407f3dec5da1c50d5370b3f6b14656f03b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721646902813854844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 629c8fdb-9801-4ad0-857f-22817bc60e17,},Annotations:map[string]string{io.kubernetes.container.hash: bb0ae72c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe3af5c01ec940f21176249de664c57dc58b34ffb6fceac4d35c55646e50b93,PodSandboxId:4b46d6216eff4c9b44543091fe96c14139cd81c0770b433a22551fe344a361df,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721646890785439732,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksk8n,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67,},Annotations:map[string]string{io.kubernetes.container.hash: c62c8be1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c87ae44611335d3e254a269ca9843e61acca277b61c3cfd5a7ee389eab0a0fe,PodSandboxId:d6ec778d8382dd3a5da66c96229b849b0083e2583bc5368486debe25f57f7f1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721646889110203722,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xv25n,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f84e764d-47ca-4634-be5b-aec35a978516,},Annotations:map[string]string{io.kubernetes.container.hash: bd4d1ad3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702ffe223ffbdbd26bcfbedcee8d508cd00c9fcdee81c7aa12ffab5e3cde854c,PodSandboxId:93b897045f2a6ca82c86b5cc3bbd2fb8400d28c3c9834c8f83140b1a1b6b1ed4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721646868481747906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ffde10143c677eeb363eba418ccd6135,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41200509492aeb47fecfd89d606a251df7b021c3320f35b984019521bdc3b59f,PodSandboxId:f0b860b9d01a7c38c4623070a44f0570613545a7418d6fbf19fee4d2f5c88092,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721646868469126497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baffe058d2eab14bba7bb69be802823,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1c7963d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fcf31453e06da42b109eb535b6676cbf599610361eb902a8727895311e88d1e,PodSandboxId:e04d4710c2a668577953d984f03f2056b61183ad103dbea70d9eeb104c69d9c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721646868460717903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2934112736843e2be33b5a75c928eeba,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a756aa97fb8a81312a36b2d014bf2619ac5d5b2c1ce5367fbec568e435b04f4,PodSandboxId:fb4864e3abe4138a4230ab20a3945772786a45b82a26234a5f40c68616c368cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721646868427941671,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaad1d5975ac0330d0eca26b6a335dc9,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 908449f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09da937a-3a2a-481e-9dee-8353147afb51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.478778503Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb4efc39-7215-47ef-a57c-de9ca8ec0f59 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.478850096Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb4efc39-7215-47ef-a57c-de9ca8ec0f59 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.480280042Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d90ca8d1-37d7-4fac-9a1d-c9fc20052b92 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.480862018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721647512480837101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d90ca8d1-37d7-4fac-9a1d-c9fc20052b92 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.481421897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=367cc0b9-4d78-47e3-b128-23f845de0635 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.481570358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=367cc0b9-4d78-47e3-b128-23f845de0635 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.481909212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c2a3493f61411824614c441e1d5f3533836e0c8afa72d53c7dd61281abbf00,PodSandboxId:71cbd29654a04cfa33acae2104625c2a7d7af11e2599abd56206c36754a3cbd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721647304228060094,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-65kqg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 103ec644-0628-4056-a814-044f38ece31f,},Annotations:map[string]string{io.kubernetes.container.hash: 318eb941,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c5daaeebe619d6f32b9f54bb15d461d2c263a24fc0e8e5162dc012448c052b,PodSandboxId:b00a720f793e1fc2cc6387f423af3e752daeb8645e98dcb7f4cdff3f14001902,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721647270731639193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksk8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67,},Annotations:map[string]string{io.kubernetes.container.hash: c62c8be1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3322a06af35cfbdbb916f0b20ac7b184a84cefba47094bfa5facfab0ec06735,PodSandboxId:4ce09d606f177ec101acc161c9fba1be8ea11505955da8064a551198e462c3c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721647270582575723,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-knmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5934987b-a9ec-4a7d-a446-b8a8c686ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 2cfecdef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f89de775577c02a3a6e9f6b3b8bda5f46ac61acae0d58173ad32707b6d8b90,PodSandboxId:adea1c11ada9b19c93f9c330f32bcd8ce0d505ab82405c44e26f9cdfab3e8a24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721647270529261108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 629c8fdb-9801-4ad0-857f-22817bc60e17,},An
notations:map[string]string{io.kubernetes.container.hash: bb0ae72c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20b67e53b9c38207a87f3b84f23b9922150eb375e18e1209254475066746763,PodSandboxId:fd958ec8bd7960b18ca1d3908fa781444ce3fe61c5c1364c1424048851ee1dca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721647270487268395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xv25n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f84e764d-47ca-4634-be5b-aec35a978516,},Annotations:map[string]string{io.ku
bernetes.container.hash: bd4d1ad3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f120afa0b416866b5719621806acc8acb202ebba2b96a4f775ec38c8f35b3bed,PodSandboxId:092da20f3dc36d253ad1c2982763f0ece671b42ca41f94a45ded61a97c5174e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721647266712601703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffde10143c677eeb363eba418ccd6135,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0b58aa965a534dae9905e00697ae0043257bc3cba127faf4cc2c9785c20ced7,PodSandboxId:beaf0d5fae8ca40535372ca52fed6f51ea7775a243ddc8229cfd377fe687d5aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721647266675536663,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baffe058d2eab14bba7bb69be802823,},Annotations:map[string]string{io.kubernetes.container.hash: 1c7963d5,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f533b9177b28fa7396cc8802a8a2c574a481880d21c9fdaddedf8efe1bda20f,PodSandboxId:149431d9bab728eef6ef26857918eb67aaa7d5d04d02348ca9cdb9ba948ad9c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721647266642994402,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaad1d5975ac0330d0eca26b6a335dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 908449f3,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1025ae107db528537896672e610b73744c367193dcdad9c8c334f88701990658,PodSandboxId:c4746e933e22198dd680a18b31b463bce38bf26805bcf1d6fe29d35e8ee39dca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721647266653238473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2934112736843e2be33b5a75c928eeba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a0458e53e93890aa4a446e6e1dccdec0c646a741abf8893201983961a9db2f,PodSandboxId:41e04960852e730f8728e4c37f9c1e1fc8ff99b855b775238a6c58738f757ba9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721646951761206290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-65kqg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 103ec644-0628-4056-a814-044f38ece31f,},Annotations:map[string]string{io.kubernetes.container.hash: 318eb941,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6cee19e34e4b971b582c31e860c234e7332fc662b0a03889f0abe267b33dc83,PodSandboxId:23dc5d18c3dc0807477d9547a0daa6f739e871e4c265802929313adae3e0de78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721646902871135551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-knmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5934987b-a9ec-4a7d-a446-b8a8c686ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 2cfecdef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ee5f6d8a84cf39ed93d9f97f2aea6af59792efc7ca3d22ab5977f29913d035,PodSandboxId:c0012afb7ace6dfc0e125e1bc98d3a407f3dec5da1c50d5370b3f6b14656f03b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721646902813854844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 629c8fdb-9801-4ad0-857f-22817bc60e17,},Annotations:map[string]string{io.kubernetes.container.hash: bb0ae72c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe3af5c01ec940f21176249de664c57dc58b34ffb6fceac4d35c55646e50b93,PodSandboxId:4b46d6216eff4c9b44543091fe96c14139cd81c0770b433a22551fe344a361df,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721646890785439732,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksk8n,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67,},Annotations:map[string]string{io.kubernetes.container.hash: c62c8be1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c87ae44611335d3e254a269ca9843e61acca277b61c3cfd5a7ee389eab0a0fe,PodSandboxId:d6ec778d8382dd3a5da66c96229b849b0083e2583bc5368486debe25f57f7f1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721646889110203722,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xv25n,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f84e764d-47ca-4634-be5b-aec35a978516,},Annotations:map[string]string{io.kubernetes.container.hash: bd4d1ad3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702ffe223ffbdbd26bcfbedcee8d508cd00c9fcdee81c7aa12ffab5e3cde854c,PodSandboxId:93b897045f2a6ca82c86b5cc3bbd2fb8400d28c3c9834c8f83140b1a1b6b1ed4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721646868481747906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ffde10143c677eeb363eba418ccd6135,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41200509492aeb47fecfd89d606a251df7b021c3320f35b984019521bdc3b59f,PodSandboxId:f0b860b9d01a7c38c4623070a44f0570613545a7418d6fbf19fee4d2f5c88092,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721646868469126497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baffe058d2eab14bba7bb69be802823,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1c7963d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fcf31453e06da42b109eb535b6676cbf599610361eb902a8727895311e88d1e,PodSandboxId:e04d4710c2a668577953d984f03f2056b61183ad103dbea70d9eeb104c69d9c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721646868460717903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2934112736843e2be33b5a75c928eeba,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a756aa97fb8a81312a36b2d014bf2619ac5d5b2c1ce5367fbec568e435b04f4,PodSandboxId:fb4864e3abe4138a4230ab20a3945772786a45b82a26234a5f40c68616c368cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721646868427941671,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaad1d5975ac0330d0eca26b6a335dc9,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 908449f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=367cc0b9-4d78-47e3-b128-23f845de0635 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.523457731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a23f990d-51f0-44d3-bf53-ee0f0da284cb name=/runtime.v1.RuntimeService/Version
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.523545026Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a23f990d-51f0-44d3-bf53-ee0f0da284cb name=/runtime.v1.RuntimeService/Version
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.524766387Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ebb7d65-a7d7-430b-afd7-4c960358276e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.525384445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721647512525360056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ebb7d65-a7d7-430b-afd7-4c960358276e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.526203472Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d99cb851-bf43-4ecb-a95a-4590cce1e09e name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.526260484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d99cb851-bf43-4ecb-a95a-4590cce1e09e name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:25:12 multinode-025157 crio[2875]: time="2024-07-22 11:25:12.529180525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c2a3493f61411824614c441e1d5f3533836e0c8afa72d53c7dd61281abbf00,PodSandboxId:71cbd29654a04cfa33acae2104625c2a7d7af11e2599abd56206c36754a3cbd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721647304228060094,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-65kqg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 103ec644-0628-4056-a814-044f38ece31f,},Annotations:map[string]string{io.kubernetes.container.hash: 318eb941,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c5daaeebe619d6f32b9f54bb15d461d2c263a24fc0e8e5162dc012448c052b,PodSandboxId:b00a720f793e1fc2cc6387f423af3e752daeb8645e98dcb7f4cdff3f14001902,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721647270731639193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksk8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67,},Annotations:map[string]string{io.kubernetes.container.hash: c62c8be1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3322a06af35cfbdbb916f0b20ac7b184a84cefba47094bfa5facfab0ec06735,PodSandboxId:4ce09d606f177ec101acc161c9fba1be8ea11505955da8064a551198e462c3c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721647270582575723,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-knmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5934987b-a9ec-4a7d-a446-b8a8c686ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 2cfecdef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f89de775577c02a3a6e9f6b3b8bda5f46ac61acae0d58173ad32707b6d8b90,PodSandboxId:adea1c11ada9b19c93f9c330f32bcd8ce0d505ab82405c44e26f9cdfab3e8a24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721647270529261108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 629c8fdb-9801-4ad0-857f-22817bc60e17,},An
notations:map[string]string{io.kubernetes.container.hash: bb0ae72c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20b67e53b9c38207a87f3b84f23b9922150eb375e18e1209254475066746763,PodSandboxId:fd958ec8bd7960b18ca1d3908fa781444ce3fe61c5c1364c1424048851ee1dca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721647270487268395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xv25n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f84e764d-47ca-4634-be5b-aec35a978516,},Annotations:map[string]string{io.ku
bernetes.container.hash: bd4d1ad3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f120afa0b416866b5719621806acc8acb202ebba2b96a4f775ec38c8f35b3bed,PodSandboxId:092da20f3dc36d253ad1c2982763f0ece671b42ca41f94a45ded61a97c5174e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721647266712601703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffde10143c677eeb363eba418ccd6135,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0b58aa965a534dae9905e00697ae0043257bc3cba127faf4cc2c9785c20ced7,PodSandboxId:beaf0d5fae8ca40535372ca52fed6f51ea7775a243ddc8229cfd377fe687d5aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721647266675536663,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baffe058d2eab14bba7bb69be802823,},Annotations:map[string]string{io.kubernetes.container.hash: 1c7963d5,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f533b9177b28fa7396cc8802a8a2c574a481880d21c9fdaddedf8efe1bda20f,PodSandboxId:149431d9bab728eef6ef26857918eb67aaa7d5d04d02348ca9cdb9ba948ad9c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721647266642994402,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaad1d5975ac0330d0eca26b6a335dc9,},Annotations:map[string]string{io.kubernetes.container.hash: 908449f3,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1025ae107db528537896672e610b73744c367193dcdad9c8c334f88701990658,PodSandboxId:c4746e933e22198dd680a18b31b463bce38bf26805bcf1d6fe29d35e8ee39dca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721647266653238473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2934112736843e2be33b5a75c928eeba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a0458e53e93890aa4a446e6e1dccdec0c646a741abf8893201983961a9db2f,PodSandboxId:41e04960852e730f8728e4c37f9c1e1fc8ff99b855b775238a6c58738f757ba9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721646951761206290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-65kqg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 103ec644-0628-4056-a814-044f38ece31f,},Annotations:map[string]string{io.kubernetes.container.hash: 318eb941,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6cee19e34e4b971b582c31e860c234e7332fc662b0a03889f0abe267b33dc83,PodSandboxId:23dc5d18c3dc0807477d9547a0daa6f739e871e4c265802929313adae3e0de78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721646902871135551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-knmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5934987b-a9ec-4a7d-a446-b8a8c686ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 2cfecdef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ee5f6d8a84cf39ed93d9f97f2aea6af59792efc7ca3d22ab5977f29913d035,PodSandboxId:c0012afb7ace6dfc0e125e1bc98d3a407f3dec5da1c50d5370b3f6b14656f03b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721646902813854844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 629c8fdb-9801-4ad0-857f-22817bc60e17,},Annotations:map[string]string{io.kubernetes.container.hash: bb0ae72c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe3af5c01ec940f21176249de664c57dc58b34ffb6fceac4d35c55646e50b93,PodSandboxId:4b46d6216eff4c9b44543091fe96c14139cd81c0770b433a22551fe344a361df,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721646890785439732,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksk8n,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67,},Annotations:map[string]string{io.kubernetes.container.hash: c62c8be1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c87ae44611335d3e254a269ca9843e61acca277b61c3cfd5a7ee389eab0a0fe,PodSandboxId:d6ec778d8382dd3a5da66c96229b849b0083e2583bc5368486debe25f57f7f1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721646889110203722,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xv25n,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f84e764d-47ca-4634-be5b-aec35a978516,},Annotations:map[string]string{io.kubernetes.container.hash: bd4d1ad3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702ffe223ffbdbd26bcfbedcee8d508cd00c9fcdee81c7aa12ffab5e3cde854c,PodSandboxId:93b897045f2a6ca82c86b5cc3bbd2fb8400d28c3c9834c8f83140b1a1b6b1ed4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721646868481747906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ffde10143c677eeb363eba418ccd6135,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41200509492aeb47fecfd89d606a251df7b021c3320f35b984019521bdc3b59f,PodSandboxId:f0b860b9d01a7c38c4623070a44f0570613545a7418d6fbf19fee4d2f5c88092,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721646868469126497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baffe058d2eab14bba7bb69be802823,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1c7963d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fcf31453e06da42b109eb535b6676cbf599610361eb902a8727895311e88d1e,PodSandboxId:e04d4710c2a668577953d984f03f2056b61183ad103dbea70d9eeb104c69d9c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721646868460717903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2934112736843e2be33b5a75c928eeba,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a756aa97fb8a81312a36b2d014bf2619ac5d5b2c1ce5367fbec568e435b04f4,PodSandboxId:fb4864e3abe4138a4230ab20a3945772786a45b82a26234a5f40c68616c368cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721646868427941671,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025157,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaad1d5975ac0330d0eca26b6a335dc9,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 908449f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d99cb851-bf43-4ecb-a95a-4590cce1e09e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e7c2a3493f614       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   71cbd29654a04       busybox-fc5497c4f-65kqg
	76c5daaeebe61       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   b00a720f793e1       kindnet-ksk8n
	c3322a06af35c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   4ce09d606f177       coredns-7db6d8ff4d-knmjk
	47f89de775577       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   adea1c11ada9b       storage-provisioner
	d20b67e53b9c3       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   fd958ec8bd796       kube-proxy-xv25n
	f120afa0b4168       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   092da20f3dc36       kube-scheduler-multinode-025157
	b0b58aa965a53       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   beaf0d5fae8ca       etcd-multinode-025157
	1025ae107db52       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   c4746e933e221       kube-controller-manager-multinode-025157
	0f533b9177b28       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   149431d9bab72       kube-apiserver-multinode-025157
	e8a0458e53e93       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   41e04960852e7       busybox-fc5497c4f-65kqg
	c6cee19e34e4b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   23dc5d18c3dc0       coredns-7db6d8ff4d-knmjk
	c8ee5f6d8a84c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   c0012afb7ace6       storage-provisioner
	1fe3af5c01ec9       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   4b46d6216eff4       kindnet-ksk8n
	1c87ae4461133       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   d6ec778d8382d       kube-proxy-xv25n
	702ffe223ffbd       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      10 minutes ago      Exited              kube-scheduler            0                   93b897045f2a6       kube-scheduler-multinode-025157
	41200509492ae       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   f0b860b9d01a7       etcd-multinode-025157
	9fcf31453e06d       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      10 minutes ago      Exited              kube-controller-manager   0                   e04d4710c2a66       kube-controller-manager-multinode-025157
	3a756aa97fb8a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      10 minutes ago      Exited              kube-apiserver            0                   fb4864e3abe41       kube-apiserver-multinode-025157
	
	
	==> coredns [c3322a06af35cfbdbb916f0b20ac7b184a84cefba47094bfa5facfab0ec06735] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54443 - 43872 "HINFO IN 332896090760497034.4179894359598936415. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010483354s
	
	
	==> coredns [c6cee19e34e4b971b582c31e860c234e7332fc662b0a03889f0abe267b33dc83] <==
	[INFO] 10.244.1.2:37567 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001974075s
	[INFO] 10.244.1.2:41722 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000102124s
	[INFO] 10.244.1.2:52402 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080501s
	[INFO] 10.244.1.2:51810 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001293499s
	[INFO] 10.244.1.2:55946 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000065316s
	[INFO] 10.244.1.2:51899 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076315s
	[INFO] 10.244.1.2:55302 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057978s
	[INFO] 10.244.0.3:56688 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087646s
	[INFO] 10.244.0.3:37771 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010968s
	[INFO] 10.244.0.3:34446 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053275s
	[INFO] 10.244.0.3:58786 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066903s
	[INFO] 10.244.1.2:60707 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121612s
	[INFO] 10.244.1.2:36258 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101944s
	[INFO] 10.244.1.2:36236 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085878s
	[INFO] 10.244.1.2:45146 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096153s
	[INFO] 10.244.0.3:58546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000086945s
	[INFO] 10.244.0.3:36364 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109212s
	[INFO] 10.244.0.3:52804 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000076438s
	[INFO] 10.244.0.3:49762 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000067055s
	[INFO] 10.244.1.2:60768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135531s
	[INFO] 10.244.1.2:44434 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000111812s
	[INFO] 10.244.1.2:50074 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000122017s
	[INFO] 10.244.1.2:40866 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078556s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-025157
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-025157
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=multinode-025157
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T11_14_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 11:14:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025157
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:25:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 11:21:09 +0000   Mon, 22 Jul 2024 11:14:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 11:21:09 +0000   Mon, 22 Jul 2024 11:14:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 11:21:09 +0000   Mon, 22 Jul 2024 11:14:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 11:21:09 +0000   Mon, 22 Jul 2024 11:15:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.158
	  Hostname:    multinode-025157
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6fa8e3a447ff48e793af8a35e95c1e84
	  System UUID:                6fa8e3a4-47ff-48e7-93af-8a35e95c1e84
	  Boot ID:                    9c2c6869-d639-4ee9-9aed-fbe6e9f60df6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-65kqg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 coredns-7db6d8ff4d-knmjk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-025157                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-ksk8n                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-025157             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-025157    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-xv25n                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-025157             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-025157 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-025157 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-025157 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-025157 event: Registered Node multinode-025157 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-025157 status is now: NodeReady
	  Normal  Starting                 4m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node multinode-025157 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node multinode-025157 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node multinode-025157 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m49s                node-controller  Node multinode-025157 event: Registered Node multinode-025157 in Controller
	
	
	Name:               multinode-025157-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-025157-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=multinode-025157
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T11_21_51_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 11:21:51 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025157-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:22:52 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 22 Jul 2024 11:22:21 +0000   Mon, 22 Jul 2024 11:23:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 22 Jul 2024 11:22:21 +0000   Mon, 22 Jul 2024 11:23:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 22 Jul 2024 11:22:21 +0000   Mon, 22 Jul 2024 11:23:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 22 Jul 2024 11:22:21 +0000   Mon, 22 Jul 2024 11:23:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.155
	  Hostname:    multinode-025157-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7973d658f5b44133b42872bf02fb84fd
	  System UUID:                7973d658-f5b4-4133-b428-72bf02fb84fd
	  Boot ID:                    d9b4f628-a5d1-4aed-9450-79a68f15d012
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xp74m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 kindnet-5wd8q              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m42s
	  kube-system                 kube-proxy-psdlq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m17s                  kube-proxy       
	  Normal  Starting                 9m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m42s (x2 over 9m42s)  kubelet          Node multinode-025157-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m42s (x2 over 9m42s)  kubelet          Node multinode-025157-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m42s (x2 over 9m42s)  kubelet          Node multinode-025157-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m24s                  kubelet          Node multinode-025157-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m21s (x2 over 3m21s)  kubelet          Node multinode-025157-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m21s (x2 over 3m21s)  kubelet          Node multinode-025157-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m21s (x2 over 3m21s)  kubelet          Node multinode-025157-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m4s                   kubelet          Node multinode-025157-m02 status is now: NodeReady
	  Normal  NodeNotReady             99s                    node-controller  Node multinode-025157-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.059845] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056408] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.196554] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.121955] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.268206] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +4.124916] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.738105] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.059262] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.509482] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.079680] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.758835] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.782243] systemd-fstab-generator[1477]: Ignoring "noauto" option for root device
	[Jul22 11:15] kauditd_printk_skb: 60 callbacks suppressed
	[ +48.067026] kauditd_printk_skb: 14 callbacks suppressed
	[Jul22 11:21] systemd-fstab-generator[2792]: Ignoring "noauto" option for root device
	[  +0.141906] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.176952] systemd-fstab-generator[2818]: Ignoring "noauto" option for root device
	[  +0.155282] systemd-fstab-generator[2830]: Ignoring "noauto" option for root device
	[  +0.272379] systemd-fstab-generator[2858]: Ignoring "noauto" option for root device
	[  +1.895007] systemd-fstab-generator[2961]: Ignoring "noauto" option for root device
	[  +1.882517] systemd-fstab-generator[3085]: Ignoring "noauto" option for root device
	[  +0.810928] kauditd_printk_skb: 144 callbacks suppressed
	[ +16.804251] kauditd_printk_skb: 72 callbacks suppressed
	[  +3.246350] systemd-fstab-generator[3901]: Ignoring "noauto" option for root device
	[ +17.560638] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [41200509492aeb47fecfd89d606a251df7b021c3320f35b984019521bdc3b59f] <==
	{"level":"info","ts":"2024-07-22T11:15:30.350897Z","caller":"traceutil/trace.go:171","msg":"trace[67404300] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"193.412177ms","start":"2024-07-22T11:15:30.157477Z","end":"2024-07-22T11:15:30.350889Z","steps":["trace[67404300] 'process raft request'  (duration: 193.033467ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T11:15:30.351078Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.078248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-025157-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-22T11:15:30.351101Z","caller":"traceutil/trace.go:171","msg":"trace[565475713] range","detail":"{range_begin:/registry/minions/multinode-025157-m02; range_end:; response_count:1; response_revision:452; }","duration":"147.184841ms","start":"2024-07-22T11:15:30.20391Z","end":"2024-07-22T11:15:30.351094Z","steps":["trace[565475713] 'agreement among raft nodes before linearized reading'  (duration: 147.039584ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T11:15:38.493632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.825303ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17619648383778651630 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/kindnet\" mod_revision:465 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kindnet\" value_size:4635 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kindnet\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-22T11:15:38.493719Z","caller":"traceutil/trace.go:171","msg":"trace[1683772628] linearizableReadLoop","detail":"{readStateIndex:523; appliedIndex:522; }","duration":"125.597666ms","start":"2024-07-22T11:15:38.36811Z","end":"2024-07-22T11:15:38.493708Z","steps":["trace[1683772628] 'read index received'  (duration: 3.25798ms)","trace[1683772628] 'applied index is now lower than readState.Index'  (duration: 122.338308ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T11:15:38.493805Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.689311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-025157-m02\" ","response":"range_response_count:1 size:2953"}
	{"level":"info","ts":"2024-07-22T11:15:38.493841Z","caller":"traceutil/trace.go:171","msg":"trace[534512718] range","detail":"{range_begin:/registry/minions/multinode-025157-m02; range_end:; response_count:1; response_revision:496; }","duration":"125.748608ms","start":"2024-07-22T11:15:38.368086Z","end":"2024-07-22T11:15:38.493835Z","steps":["trace[534512718] 'agreement among raft nodes before linearized reading'  (duration: 125.661881ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T11:15:38.494055Z","caller":"traceutil/trace.go:171","msg":"trace[1151554224] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"267.665186ms","start":"2024-07-22T11:15:38.226334Z","end":"2024-07-22T11:15:38.493999Z","steps":["trace[1151554224] 'process raft request'  (duration: 145.089411ms)","trace[1151554224] 'compare'  (duration: 121.109519ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-22T11:16:19.621159Z","caller":"traceutil/trace.go:171","msg":"trace[1662668731] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"185.865288ms","start":"2024-07-22T11:16:19.435266Z","end":"2024-07-22T11:16:19.621132Z","steps":["trace[1662668731] 'process raft request'  (duration: 185.831104ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T11:16:19.621411Z","caller":"traceutil/trace.go:171","msg":"trace[290881786] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"232.548282ms","start":"2024-07-22T11:16:19.388853Z","end":"2024-07-22T11:16:19.621401Z","steps":["trace[290881786] 'process raft request'  (duration: 148.815885ms)","trace[290881786] 'compare'  (duration: 83.252455ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-22T11:16:19.62152Z","caller":"traceutil/trace.go:171","msg":"trace[163786640] linearizableReadLoop","detail":"{readStateIndex:615; appliedIndex:614; }","duration":"222.148608ms","start":"2024-07-22T11:16:19.399365Z","end":"2024-07-22T11:16:19.621514Z","steps":["trace[163786640] 'read index received'  (duration: 138.313621ms)","trace[163786640] 'applied index is now lower than readState.Index'  (duration: 83.83448ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T11:16:19.621719Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.29869ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-22T11:16:19.621762Z","caller":"traceutil/trace.go:171","msg":"trace[599180933] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:581; }","duration":"222.411891ms","start":"2024-07-22T11:16:19.399345Z","end":"2024-07-22T11:16:19.621756Z","steps":["trace[599180933] 'agreement among raft nodes before linearized reading'  (duration: 222.297194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T11:16:19.621854Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.793881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/multinode-025157-m03.17e484ce58fcaf6f\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-22T11:16:19.621891Z","caller":"traceutil/trace.go:171","msg":"trace[488886405] range","detail":"{range_begin:/registry/events/default/multinode-025157-m03.17e484ce58fcaf6f; range_end:; response_count:0; response_revision:581; }","duration":"186.887229ms","start":"2024-07-22T11:16:19.434995Z","end":"2024-07-22T11:16:19.621883Z","steps":["trace[488886405] 'agreement among raft nodes before linearized reading'  (duration: 186.842498ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T11:19:29.852743Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-22T11:19:29.852869Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-025157","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.158:2380"],"advertise-client-urls":["https://192.168.39.158:2379"]}
	{"level":"warn","ts":"2024-07-22T11:19:29.85297Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.158:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T11:19:29.852997Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.158:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T11:19:29.861726Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T11:19:29.861816Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-22T11:19:29.932868Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c2e3bdcd19c3f485","current-leader-member-id":"c2e3bdcd19c3f485"}
	{"level":"info","ts":"2024-07-22T11:19:29.935269Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.158:2380"}
	{"level":"info","ts":"2024-07-22T11:19:29.935456Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.158:2380"}
	{"level":"info","ts":"2024-07-22T11:19:29.935489Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-025157","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.158:2380"],"advertise-client-urls":["https://192.168.39.158:2379"]}
	
	
	==> etcd [b0b58aa965a534dae9905e00697ae0043257bc3cba127faf4cc2c9785c20ced7] <==
	{"level":"info","ts":"2024-07-22T11:21:07.190435Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-22T11:21:07.190729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 switched to configuration voters=(14043276751669556357)"}
	{"level":"info","ts":"2024-07-22T11:21:07.190801Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"632f2ed81879f448","local-member-id":"c2e3bdcd19c3f485","added-peer-id":"c2e3bdcd19c3f485","added-peer-peer-urls":["https://192.168.39.158:2380"]}
	{"level":"info","ts":"2024-07-22T11:21:07.190938Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"632f2ed81879f448","local-member-id":"c2e3bdcd19c3f485","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:21:07.190986Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:21:07.205712Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-22T11:21:07.205937Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c2e3bdcd19c3f485","initial-advertise-peer-urls":["https://192.168.39.158:2380"],"listen-peer-urls":["https://192.168.39.158:2380"],"advertise-client-urls":["https://192.168.39.158:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.158:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-22T11:21:07.209764Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.158:2380"}
	{"level":"info","ts":"2024-07-22T11:21:07.209798Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.158:2380"}
	{"level":"info","ts":"2024-07-22T11:21:07.205993Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T11:21:08.534494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-22T11:21:08.534552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-22T11:21:08.534585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 received MsgPreVoteResp from c2e3bdcd19c3f485 at term 2"}
	{"level":"info","ts":"2024-07-22T11:21:08.534612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 became candidate at term 3"}
	{"level":"info","ts":"2024-07-22T11:21:08.534623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 received MsgVoteResp from c2e3bdcd19c3f485 at term 3"}
	{"level":"info","ts":"2024-07-22T11:21:08.534636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 became leader at term 3"}
	{"level":"info","ts":"2024-07-22T11:21:08.534646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c2e3bdcd19c3f485 elected leader c2e3bdcd19c3f485 at term 3"}
	{"level":"info","ts":"2024-07-22T11:21:08.539332Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c2e3bdcd19c3f485","local-member-attributes":"{Name:multinode-025157 ClientURLs:[https://192.168.39.158:2379]}","request-path":"/0/members/c2e3bdcd19c3f485/attributes","cluster-id":"632f2ed81879f448","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T11:21:08.53945Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:21:08.539492Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T11:21:08.539502Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T11:21:08.539459Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:21:08.541668Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T11:21:08.541807Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.158:2379"}
	{"level":"info","ts":"2024-07-22T11:22:32.336502Z","caller":"traceutil/trace.go:171","msg":"trace[805025319] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"102.559202ms","start":"2024-07-22T11:22:32.233902Z","end":"2024-07-22T11:22:32.336461Z","steps":["trace[805025319] 'process raft request'  (duration: 58.646145ms)","trace[805025319] 'compare'  (duration: 43.385281ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:25:12 up 11 min,  0 users,  load average: 0.08, 0.15, 0.12
	Linux multinode-025157 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1fe3af5c01ec940f21176249de664c57dc58b34ffb6fceac4d35c55646e50b93] <==
	I0722 11:18:41.841239       1 main.go:322] Node multinode-025157-m03 has CIDR [10.244.3.0/24] 
	I0722 11:18:51.837461       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0722 11:18:51.837526       1 main.go:322] Node multinode-025157-m03 has CIDR [10.244.3.0/24] 
	I0722 11:18:51.837678       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:18:51.837704       1 main.go:299] handling current node
	I0722 11:18:51.837729       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:18:51.837748       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:19:01.842827       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:19:01.842927       1 main.go:299] handling current node
	I0722 11:19:01.842965       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:19:01.842971       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:19:01.843179       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0722 11:19:01.843202       1 main.go:322] Node multinode-025157-m03 has CIDR [10.244.3.0/24] 
	I0722 11:19:11.840470       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0722 11:19:11.840649       1 main.go:322] Node multinode-025157-m03 has CIDR [10.244.3.0/24] 
	I0722 11:19:11.840825       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:19:11.840851       1 main.go:299] handling current node
	I0722 11:19:11.840872       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:19:11.840890       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:19:21.842534       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:19:21.842592       1 main.go:299] handling current node
	I0722 11:19:21.842614       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:19:21.842620       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:19:21.842768       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0722 11:19:21.842793       1 main.go:322] Node multinode-025157-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [76c5daaeebe619d6f32b9f54bb15d461d2c263a24fc0e8e5162dc012448c052b] <==
	I0722 11:24:11.745815       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:24:21.752408       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:24:21.752469       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:24:21.752601       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:24:21.752609       1 main.go:299] handling current node
	I0722 11:24:31.752171       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:24:31.752310       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:24:31.752494       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:24:31.752531       1 main.go:299] handling current node
	I0722 11:24:41.747675       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:24:41.747773       1 main.go:299] handling current node
	I0722 11:24:41.747804       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:24:41.747813       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:24:51.748473       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:24:51.748546       1 main.go:299] handling current node
	I0722 11:24:51.748575       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:24:51.748580       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:25:01.754156       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:25:01.754350       1 main.go:299] handling current node
	I0722 11:25:01.754400       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:25:01.754429       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	I0722 11:25:11.745086       1 main.go:295] Handling node with IPs: map[192.168.39.158:{}]
	I0722 11:25:11.745129       1 main.go:299] handling current node
	I0722 11:25:11.745144       1 main.go:295] Handling node with IPs: map[192.168.39.155:{}]
	I0722 11:25:11.745149       1 main.go:322] Node multinode-025157-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0f533b9177b28fa7396cc8802a8a2c574a481880d21c9fdaddedf8efe1bda20f] <==
	I0722 11:21:09.846915       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0722 11:21:09.847046       1 policy_source.go:224] refreshing policies
	I0722 11:21:09.864918       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0722 11:21:09.864983       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0722 11:21:09.866329       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0722 11:21:09.867532       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0722 11:21:09.868105       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0722 11:21:09.876198       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0722 11:21:09.877683       1 shared_informer.go:320] Caches are synced for configmaps
	E0722 11:21:09.882790       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0722 11:21:09.885270       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0722 11:21:09.903135       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0722 11:21:09.903249       1 aggregator.go:165] initial CRD sync complete...
	I0722 11:21:09.903294       1 autoregister_controller.go:141] Starting autoregister controller
	I0722 11:21:09.903318       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0722 11:21:09.903340       1 cache.go:39] Caches are synced for autoregister controller
	I0722 11:21:09.936729       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0722 11:21:10.785694       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0722 11:21:11.930002       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0722 11:21:12.049892       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0722 11:21:12.065775       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0722 11:21:12.128747       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0722 11:21:12.136277       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0722 11:21:23.148107       1 controller.go:615] quota admission added evaluator for: endpoints
	I0722 11:21:23.347849       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [3a756aa97fb8a81312a36b2d014bf2619ac5d5b2c1ce5367fbec568e435b04f4] <==
	I0722 11:14:31.786684       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0722 11:14:31.791647       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0722 11:14:31.791675       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0722 11:14:32.300382       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0722 11:14:32.341172       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0722 11:14:32.387691       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0722 11:14:32.395821       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.158]
	I0722 11:14:32.396547       1 controller.go:615] quota admission added evaluator for: endpoints
	I0722 11:14:32.400359       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0722 11:14:32.866678       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0722 11:14:33.306718       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0722 11:14:33.327162       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0722 11:14:33.360472       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0722 11:14:46.868654       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0722 11:14:46.987355       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0722 11:15:52.974761       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39490: use of closed network connection
	E0722 11:15:53.140919       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39506: use of closed network connection
	E0722 11:15:53.316187       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39522: use of closed network connection
	E0722 11:15:53.486605       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39550: use of closed network connection
	E0722 11:15:53.651403       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39560: use of closed network connection
	E0722 11:15:54.093451       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39600: use of closed network connection
	E0722 11:15:54.267177       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39628: use of closed network connection
	E0722 11:15:54.434672       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39640: use of closed network connection
	E0722 11:15:54.606159       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:39650: use of closed network connection
	I0722 11:19:29.860609       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [1025ae107db528537896672e610b73744c367193dcdad9c8c334f88701990658] <==
	I0722 11:21:51.192930       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025157-m02" podCIDRs=["10.244.1.0/24"]
	I0722 11:21:52.800107       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.438µs"
	I0722 11:21:53.081865       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.638µs"
	I0722 11:21:53.106690       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.443µs"
	I0722 11:21:53.114598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.48µs"
	I0722 11:21:53.128279       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.43µs"
	I0722 11:21:53.135328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.475µs"
	I0722 11:21:53.137706       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.68µs"
	I0722 11:22:08.977510       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:22:08.995687       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.682µs"
	I0722 11:22:09.007155       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.856µs"
	I0722 11:22:10.426087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.531976ms"
	I0722 11:22:10.426310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.464µs"
	I0722 11:22:27.012872       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:22:28.064188       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:22:28.064939       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025157-m03\" does not exist"
	I0722 11:22:28.073125       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025157-m03" podCIDRs=["10.244.2.0/24"]
	I0722 11:22:45.799714       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:22:51.200344       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:23:33.170707       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.014142ms"
	I0722 11:23:33.172442       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.446µs"
	I0722 11:24:03.032907       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4n82n"
	I0722 11:24:03.056227       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4n82n"
	I0722 11:24:03.056316       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-zgpkm"
	I0722 11:24:03.076531       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-zgpkm"
	
	
	==> kube-controller-manager [9fcf31453e06da42b109eb535b6676cbf599610361eb902a8727895311e88d1e] <==
	I0722 11:15:30.358770       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025157-m02\" does not exist"
	I0722 11:15:30.390990       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025157-m02" podCIDRs=["10.244.1.0/24"]
	I0722 11:15:31.094391       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025157-m02"
	I0722 11:15:48.177060       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:15:50.412369       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.480398ms"
	I0722 11:15:50.444749       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.301494ms"
	I0722 11:15:50.460866       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.005329ms"
	I0722 11:15:50.460968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.858µs"
	I0722 11:15:52.184169       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.748566ms"
	I0722 11:15:52.185069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.984µs"
	I0722 11:15:52.575296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.793352ms"
	I0722 11:15:52.576157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.324µs"
	I0722 11:16:19.624810       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025157-m03\" does not exist"
	I0722 11:16:19.625610       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:16:19.688728       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025157-m03" podCIDRs=["10.244.2.0/24"]
	I0722 11:16:21.115704       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025157-m03"
	I0722 11:16:37.585069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:17:05.550243       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:17:06.659341       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:17:06.659462       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025157-m03\" does not exist"
	I0722 11:17:06.690359       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025157-m03" podCIDRs=["10.244.3.0/24"]
	I0722 11:17:24.339545       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m02"
	I0722 11:18:06.167967       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025157-m03"
	I0722 11:18:06.208510       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.418819ms"
	I0722 11:18:06.209328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.342µs"
	
	
	==> kube-proxy [1c87ae44611335d3e254a269ca9843e61acca277b61c3cfd5a7ee389eab0a0fe] <==
	I0722 11:14:49.251413       1 server_linux.go:69] "Using iptables proxy"
	I0722 11:14:49.262542       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.158"]
	I0722 11:14:49.298682       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 11:14:49.298769       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 11:14:49.298787       1 server_linux.go:165] "Using iptables Proxier"
	I0722 11:14:49.301415       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 11:14:49.301612       1 server.go:872] "Version info" version="v1.30.3"
	I0722 11:14:49.301806       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 11:14:49.303349       1 config.go:192] "Starting service config controller"
	I0722 11:14:49.303906       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 11:14:49.311095       1 shared_informer.go:320] Caches are synced for service config
	I0722 11:14:49.303704       1 config.go:101] "Starting endpoint slice config controller"
	I0722 11:14:49.311305       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 11:14:49.311327       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 11:14:49.304694       1 config.go:319] "Starting node config controller"
	I0722 11:14:49.311442       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 11:14:49.311447       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d20b67e53b9c38207a87f3b84f23b9922150eb375e18e1209254475066746763] <==
	I0722 11:21:10.780689       1 server_linux.go:69] "Using iptables proxy"
	I0722 11:21:10.799311       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.158"]
	I0722 11:21:10.879730       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 11:21:10.879834       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 11:21:10.879855       1 server_linux.go:165] "Using iptables Proxier"
	I0722 11:21:10.886910       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 11:21:10.887202       1 server.go:872] "Version info" version="v1.30.3"
	I0722 11:21:10.887230       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 11:21:10.891165       1 config.go:192] "Starting service config controller"
	I0722 11:21:10.891195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 11:21:10.891220       1 config.go:101] "Starting endpoint slice config controller"
	I0722 11:21:10.891224       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 11:21:10.891567       1 config.go:319] "Starting node config controller"
	I0722 11:21:10.891596       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 11:21:10.991354       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 11:21:10.991422       1 shared_informer.go:320] Caches are synced for service config
	I0722 11:21:10.992122       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [702ffe223ffbdbd26bcfbedcee8d508cd00c9fcdee81c7aa12ffab5e3cde854c] <==
	E0722 11:14:30.902870       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 11:14:30.902929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 11:14:30.902954       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 11:14:31.784616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 11:14:31.784664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0722 11:14:31.819176       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 11:14:31.819224       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 11:14:31.836267       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0722 11:14:31.836309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0722 11:14:31.890391       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 11:14:31.890418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 11:14:31.902208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 11:14:31.902232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 11:14:31.934503       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 11:14:31.934544       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 11:14:32.030813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 11:14:32.030855       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 11:14:32.048468       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 11:14:32.048550       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 11:14:32.055883       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 11:14:32.056057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 11:14:32.111970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 11:14:32.112260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0722 11:14:33.893092       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0722 11:19:29.851795       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f120afa0b416866b5719621806acc8acb202ebba2b96a4f775ec38c8f35b3bed] <==
	I0722 11:21:07.855570       1 serving.go:380] Generated self-signed cert in-memory
	W0722 11:21:09.863734       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0722 11:21:09.863836       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 11:21:09.863847       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0722 11:21:09.863856       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0722 11:21:09.888283       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0722 11:21:09.888394       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 11:21:09.890644       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0722 11:21:09.890683       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 11:21:09.891344       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0722 11:21:09.891411       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0722 11:21:09.991262       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 11:21:10 multinode-025157 kubelet[3092]: I0722 11:21:10.046583    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67-cni-cfg\") pod \"kindnet-ksk8n\" (UID: \"6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67\") " pod="kube-system/kindnet-ksk8n"
	Jul 22 11:21:10 multinode-025157 kubelet[3092]: I0722 11:21:10.046624    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f84e764d-47ca-4634-be5b-aec35a978516-lib-modules\") pod \"kube-proxy-xv25n\" (UID: \"f84e764d-47ca-4634-be5b-aec35a978516\") " pod="kube-system/kube-proxy-xv25n"
	Jul 22 11:21:10 multinode-025157 kubelet[3092]: I0722 11:21:10.046688    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/629c8fdb-9801-4ad0-857f-22817bc60e17-tmp\") pod \"storage-provisioner\" (UID: \"629c8fdb-9801-4ad0-857f-22817bc60e17\") " pod="kube-system/storage-provisioner"
	Jul 22 11:21:10 multinode-025157 kubelet[3092]: I0722 11:21:10.046730    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67-xtables-lock\") pod \"kindnet-ksk8n\" (UID: \"6c01f5e7-c64e-48ad-9c0e-7fefdbc0de67\") " pod="kube-system/kindnet-ksk8n"
	Jul 22 11:21:14 multinode-025157 kubelet[3092]: I0722 11:21:14.848678    3092 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 22 11:22:06 multinode-025157 kubelet[3092]: E0722 11:22:06.036147    3092 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 11:22:06 multinode-025157 kubelet[3092]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 11:22:06 multinode-025157 kubelet[3092]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 11:22:06 multinode-025157 kubelet[3092]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 11:22:06 multinode-025157 kubelet[3092]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 11:23:06 multinode-025157 kubelet[3092]: E0722 11:23:06.040225    3092 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 11:23:06 multinode-025157 kubelet[3092]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 11:23:06 multinode-025157 kubelet[3092]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 11:23:06 multinode-025157 kubelet[3092]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 11:23:06 multinode-025157 kubelet[3092]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 11:24:06 multinode-025157 kubelet[3092]: E0722 11:24:06.037573    3092 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 11:24:06 multinode-025157 kubelet[3092]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 11:24:06 multinode-025157 kubelet[3092]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 11:24:06 multinode-025157 kubelet[3092]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 11:24:06 multinode-025157 kubelet[3092]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 11:25:06 multinode-025157 kubelet[3092]: E0722 11:25:06.036978    3092 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 11:25:06 multinode-025157 kubelet[3092]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 11:25:06 multinode-025157 kubelet[3092]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 11:25:06 multinode-025157 kubelet[3092]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 11:25:06 multinode-025157 kubelet[3092]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:25:12.136448   44442 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19313-5960/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-025157 -n multinode-025157
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-025157 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.41s)
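The kubelet entries above fail once a minute while creating the KUBE-KUBELET-CANARY chain because ip6tables cannot initialize the `nat' table inside the guest, typically because the ip6table_nat module is not loaded in the guest kernel. A minimal diagnostic sketch, assuming the multinode-025157 profile shown in the logs and that the guest kernel actually ships ip6table_nat; these commands are illustrative and are not part of the test run:

	# open a shell inside the affected minikube VM
	minikube ssh -p multinode-025157
	# inside the VM: check whether the IPv6 nat table backend is loaded
	lsmod | grep ip6table_nat
	sudo modprobe ip6table_nat       # load it if missing (only possible if the kernel provides the module)
	sudo ip6tables -t nat -L -n      # should list the nat chains once the table can be initialized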

                                                
                                    
TestPreload (275.42s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-639195 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-639195 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m14.472233676s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-639195 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-639195 image pull gcr.io/k8s-minikube/busybox: (1.100273363s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-639195
E0722 11:31:19.659346   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 11:31:36.611565   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-639195: exit status 82 (2m0.454888834s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-639195"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-639195 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-07-22 11:33:15.55435986 +0000 UTC m=+3866.281774211
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-639195 -n test-preload-639195
E0722 11:33:29.088566   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-639195 -n test-preload-639195: exit status 3 (18.517656956s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:33:34.068746   47369 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	E0722 11:33:34.068769   47369 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-639195" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-639195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-639195
--- FAIL: TestPreload (275.42s)
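TestPreload failed at the stop step: out/minikube-linux-amd64 stop returned exit status 82 (GUEST_STOP_TIMEOUT) after roughly two minutes with the VM still reported as "Running", and the follow-up status check could no longer reach 192.168.39.184:22. A manual reproduction sketch that simply replays the commands the test ran above, plus the log collection that the failure message itself recommends; the profile name and flags are taken verbatim from the output and are illustrative only:

	out/minikube-linux-amd64 start -p test-preload-639195 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-639195 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-639195
	# if stop times out again, gather logs as the GUEST_STOP_TIMEOUT advice box suggests
	out/minikube-linux-amd64 -p test-preload-639195 logs --file=logs.txt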

                                                
                                    
TestKubernetesUpgrade (474s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-651148 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-651148 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m56.912140456s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-651148] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19313
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-651148" primary control-plane node in "kubernetes-upgrade-651148" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 11:37:02.741129   49739 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:37:02.741373   49739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:37:02.741383   49739 out.go:304] Setting ErrFile to fd 2...
	I0722 11:37:02.741388   49739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:37:02.741635   49739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:37:02.742205   49739 out.go:298] Setting JSON to false
	I0722 11:37:02.743188   49739 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4775,"bootTime":1721643448,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 11:37:02.743247   49739 start.go:139] virtualization: kvm guest
	I0722 11:37:02.745194   49739 out.go:177] * [kubernetes-upgrade-651148] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 11:37:02.746901   49739 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 11:37:02.746896   49739 notify.go:220] Checking for updates...
	I0722 11:37:02.748326   49739 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 11:37:02.749631   49739 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:37:02.751038   49739 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:37:02.752321   49739 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 11:37:02.753795   49739 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 11:37:02.755645   49739 config.go:182] Loaded profile config "NoKubernetes-543094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:37:02.755852   49739 config.go:182] Loaded profile config "pause-812059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:37:02.755984   49739 config.go:182] Loaded profile config "running-upgrade-555273": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0722 11:37:02.756102   49739 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 11:37:02.798674   49739 out.go:177] * Using the kvm2 driver based on user configuration
	I0722 11:37:02.800100   49739 start.go:297] selected driver: kvm2
	I0722 11:37:02.800126   49739 start.go:901] validating driver "kvm2" against <nil>
	I0722 11:37:02.800150   49739 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 11:37:02.801236   49739 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:37:02.801339   49739 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 11:37:02.817685   49739 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 11:37:02.817725   49739 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 11:37:02.817943   49739 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 11:37:02.817998   49739 cni.go:84] Creating CNI manager for ""
	I0722 11:37:02.818014   49739 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:37:02.818024   49739 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 11:37:02.818085   49739 start.go:340] cluster config:
	{Name:kubernetes-upgrade-651148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-651148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:37:02.818180   49739 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:37:02.819458   49739 out.go:177] * Starting "kubernetes-upgrade-651148" primary control-plane node in "kubernetes-upgrade-651148" cluster
	I0722 11:37:02.820486   49739 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 11:37:02.820527   49739 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0722 11:37:02.820538   49739 cache.go:56] Caching tarball of preloaded images
	I0722 11:37:02.820620   49739 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 11:37:02.820632   49739 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0722 11:37:02.820718   49739 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/config.json ...
	I0722 11:37:02.820735   49739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/config.json: {Name:mk5174103e98b22665d02315c69d231371ac4639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:37:02.820852   49739 start.go:360] acquireMachinesLock for kubernetes-upgrade-651148: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:37:29.810656   49739 start.go:364] duration metric: took 26.989772139s to acquireMachinesLock for "kubernetes-upgrade-651148"
	I0722 11:37:29.810719   49739 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-651148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-651148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:37:29.810855   49739 start.go:125] createHost starting for "" (driver="kvm2")
	I0722 11:37:29.812424   49739 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 11:37:29.812647   49739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:37:29.812697   49739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:37:29.829661   49739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45375
	I0722 11:37:29.830133   49739 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:37:29.830580   49739 main.go:141] libmachine: Using API Version  1
	I0722 11:37:29.830604   49739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:37:29.830937   49739 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:37:29.831089   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetMachineName
	I0722 11:37:29.831296   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .DriverName
	I0722 11:37:29.831419   49739 start.go:159] libmachine.API.Create for "kubernetes-upgrade-651148" (driver="kvm2")
	I0722 11:37:29.831450   49739 client.go:168] LocalClient.Create starting
	I0722 11:37:29.831482   49739 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem
	I0722 11:37:29.831528   49739 main.go:141] libmachine: Decoding PEM data...
	I0722 11:37:29.831544   49739 main.go:141] libmachine: Parsing certificate...
	I0722 11:37:29.831600   49739 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem
	I0722 11:37:29.831620   49739 main.go:141] libmachine: Decoding PEM data...
	I0722 11:37:29.831633   49739 main.go:141] libmachine: Parsing certificate...
	I0722 11:37:29.831657   49739 main.go:141] libmachine: Running pre-create checks...
	I0722 11:37:29.831665   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .PreCreateCheck
	I0722 11:37:29.832043   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetConfigRaw
	I0722 11:37:29.832483   49739 main.go:141] libmachine: Creating machine...
	I0722 11:37:29.832498   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .Create
	I0722 11:37:29.832629   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Creating KVM machine...
	I0722 11:37:29.833928   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found existing default KVM network
	I0722 11:37:29.835375   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:29.835214   50136 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001a7f50}
	I0722 11:37:29.835443   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | created network xml: 
	I0722 11:37:29.835470   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | <network>
	I0722 11:37:29.835487   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG |   <name>mk-kubernetes-upgrade-651148</name>
	I0722 11:37:29.835500   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG |   <dns enable='no'/>
	I0722 11:37:29.835511   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG |   
	I0722 11:37:29.835522   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0722 11:37:29.835535   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG |     <dhcp>
	I0722 11:37:29.835545   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0722 11:37:29.835556   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG |     </dhcp>
	I0722 11:37:29.835568   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG |   </ip>
	I0722 11:37:29.835580   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG |   
	I0722 11:37:29.835591   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | </network>
	I0722 11:37:29.835603   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | 
	I0722 11:37:29.840423   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | trying to create private KVM network mk-kubernetes-upgrade-651148 192.168.39.0/24...
	I0722 11:37:29.930054   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | private KVM network mk-kubernetes-upgrade-651148 192.168.39.0/24 created
	I0722 11:37:29.930217   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Setting up store path in /home/jenkins/minikube-integration/19313-5960/.minikube/machines/kubernetes-upgrade-651148 ...
	I0722 11:37:29.930255   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Building disk image from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 11:37:29.930287   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:29.930167   50136 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:37:29.930314   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Downloading /home/jenkins/minikube-integration/19313-5960/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 11:37:30.182238   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:30.182081   50136 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/kubernetes-upgrade-651148/id_rsa...
	I0722 11:37:30.360400   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:30.360273   50136 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/kubernetes-upgrade-651148/kubernetes-upgrade-651148.rawdisk...
	I0722 11:37:30.360434   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | Writing magic tar header
	I0722 11:37:30.360452   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | Writing SSH key tar header
	I0722 11:37:30.360470   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:30.360423   50136 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/kubernetes-upgrade-651148 ...
	I0722 11:37:30.360595   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/kubernetes-upgrade-651148
	I0722 11:37:30.360648   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines
	I0722 11:37:30.360665   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/kubernetes-upgrade-651148 (perms=drwx------)
	I0722 11:37:30.360688   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines (perms=drwxr-xr-x)
	I0722 11:37:30.360708   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube (perms=drwxr-xr-x)
	I0722 11:37:30.360724   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:37:30.360741   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960
	I0722 11:37:30.360754   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0722 11:37:30.360768   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | Checking permissions on dir: /home/jenkins
	I0722 11:37:30.360780   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | Checking permissions on dir: /home
	I0722 11:37:30.360795   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960 (perms=drwxrwxr-x)
	I0722 11:37:30.360807   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | Skipping /home - not owner
	I0722 11:37:30.360826   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0722 11:37:30.360840   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0722 11:37:30.360865   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Creating domain...
	I0722 11:37:30.361980   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) define libvirt domain using xml: 
	I0722 11:37:30.362003   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) <domain type='kvm'>
	I0722 11:37:30.362015   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)   <name>kubernetes-upgrade-651148</name>
	I0722 11:37:30.362023   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)   <memory unit='MiB'>2200</memory>
	I0722 11:37:30.362037   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)   <vcpu>2</vcpu>
	I0722 11:37:30.362045   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)   <features>
	I0722 11:37:30.362054   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     <acpi/>
	I0722 11:37:30.362061   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     <apic/>
	I0722 11:37:30.362069   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     <pae/>
	I0722 11:37:30.362093   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     
	I0722 11:37:30.362107   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)   </features>
	I0722 11:37:30.362129   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)   <cpu mode='host-passthrough'>
	I0722 11:37:30.362141   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)   
	I0722 11:37:30.362157   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)   </cpu>
	I0722 11:37:30.362169   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)   <os>
	I0722 11:37:30.362179   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     <type>hvm</type>
	I0722 11:37:30.362189   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     <boot dev='cdrom'/>
	I0722 11:37:30.362197   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     <boot dev='hd'/>
	I0722 11:37:30.362218   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     <bootmenu enable='no'/>
	I0722 11:37:30.362231   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)   </os>
	I0722 11:37:30.362244   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)   <devices>
	I0722 11:37:30.362265   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     <disk type='file' device='cdrom'>
	I0722 11:37:30.362284   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/kubernetes-upgrade-651148/boot2docker.iso'/>
	I0722 11:37:30.362297   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)       <target dev='hdc' bus='scsi'/>
	I0722 11:37:30.362307   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)       <readonly/>
	I0722 11:37:30.362316   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     </disk>
	I0722 11:37:30.362327   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     <disk type='file' device='disk'>
	I0722 11:37:30.362337   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0722 11:37:30.362350   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/kubernetes-upgrade-651148/kubernetes-upgrade-651148.rawdisk'/>
	I0722 11:37:30.362358   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)       <target dev='hda' bus='virtio'/>
	I0722 11:37:30.362365   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     </disk>
	I0722 11:37:30.362372   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     <interface type='network'>
	I0722 11:37:30.362383   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)       <source network='mk-kubernetes-upgrade-651148'/>
	I0722 11:37:30.362390   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)       <model type='virtio'/>
	I0722 11:37:30.362398   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     </interface>
	I0722 11:37:30.362415   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     <interface type='network'>
	I0722 11:37:30.362425   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)       <source network='default'/>
	I0722 11:37:30.362432   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)       <model type='virtio'/>
	I0722 11:37:30.362440   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     </interface>
	I0722 11:37:30.362447   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     <serial type='pty'>
	I0722 11:37:30.362455   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)       <target port='0'/>
	I0722 11:37:30.362462   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     </serial>
	I0722 11:37:30.362471   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     <console type='pty'>
	I0722 11:37:30.362482   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)       <target type='serial' port='0'/>
	I0722 11:37:30.362492   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     </console>
	I0722 11:37:30.362502   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     <rng model='virtio'>
	I0722 11:37:30.362514   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)       <backend model='random'>/dev/random</backend>
	I0722 11:37:30.362525   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     </rng>
	I0722 11:37:30.362535   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     
	I0722 11:37:30.362545   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)     
	I0722 11:37:30.362563   49739 main.go:141] libmachine: (kubernetes-upgrade-651148)   </devices>
	I0722 11:37:30.362583   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) </domain>
	I0722 11:37:30.362598   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) 
	I0722 11:37:30.370575   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:be:60:80 in network default
	I0722 11:37:30.371325   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Ensuring networks are active...
	I0722 11:37:30.371348   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:30.372021   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Ensuring network default is active
	I0722 11:37:30.372446   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Ensuring network mk-kubernetes-upgrade-651148 is active
	I0722 11:37:30.373073   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Getting domain xml...
	I0722 11:37:30.373807   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Creating domain...
	I0722 11:37:31.741855   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Waiting to get IP...
	I0722 11:37:31.742829   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:31.743753   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | unable to find current IP address of domain kubernetes-upgrade-651148 in network mk-kubernetes-upgrade-651148
	I0722 11:37:31.743790   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:31.743731   50136 retry.go:31] will retry after 238.903923ms: waiting for machine to come up
	I0722 11:37:31.984000   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:31.984605   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | unable to find current IP address of domain kubernetes-upgrade-651148 in network mk-kubernetes-upgrade-651148
	I0722 11:37:31.984636   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:31.984572   50136 retry.go:31] will retry after 313.893165ms: waiting for machine to come up
	I0722 11:37:32.300346   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:32.301152   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | unable to find current IP address of domain kubernetes-upgrade-651148 in network mk-kubernetes-upgrade-651148
	I0722 11:37:32.301184   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:32.301069   50136 retry.go:31] will retry after 459.30673ms: waiting for machine to come up
	I0722 11:37:32.761878   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:32.761958   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | unable to find current IP address of domain kubernetes-upgrade-651148 in network mk-kubernetes-upgrade-651148
	I0722 11:37:32.762000   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:32.761922   50136 retry.go:31] will retry after 383.062085ms: waiting for machine to come up
	I0722 11:37:33.146464   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:33.147098   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | unable to find current IP address of domain kubernetes-upgrade-651148 in network mk-kubernetes-upgrade-651148
	I0722 11:37:33.147128   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:33.147042   50136 retry.go:31] will retry after 759.134762ms: waiting for machine to come up
	I0722 11:37:33.907686   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:33.908211   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | unable to find current IP address of domain kubernetes-upgrade-651148 in network mk-kubernetes-upgrade-651148
	I0722 11:37:33.908241   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:33.908186   50136 retry.go:31] will retry after 642.310561ms: waiting for machine to come up
	I0722 11:37:34.552540   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:34.553029   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | unable to find current IP address of domain kubernetes-upgrade-651148 in network mk-kubernetes-upgrade-651148
	I0722 11:37:34.553068   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:34.552978   50136 retry.go:31] will retry after 849.706789ms: waiting for machine to come up
	I0722 11:37:35.404490   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:35.404930   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | unable to find current IP address of domain kubernetes-upgrade-651148 in network mk-kubernetes-upgrade-651148
	I0722 11:37:35.404956   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:35.404896   50136 retry.go:31] will retry after 1.018075562s: waiting for machine to come up
	I0722 11:37:36.425277   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:36.425814   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | unable to find current IP address of domain kubernetes-upgrade-651148 in network mk-kubernetes-upgrade-651148
	I0722 11:37:36.425839   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:36.425753   50136 retry.go:31] will retry after 1.823066458s: waiting for machine to come up
	I0722 11:37:38.250029   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:38.250527   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | unable to find current IP address of domain kubernetes-upgrade-651148 in network mk-kubernetes-upgrade-651148
	I0722 11:37:38.250549   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:38.250486   50136 retry.go:31] will retry after 1.74928449s: waiting for machine to come up
	I0722 11:37:40.001359   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:40.001899   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | unable to find current IP address of domain kubernetes-upgrade-651148 in network mk-kubernetes-upgrade-651148
	I0722 11:37:40.001925   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:40.001842   50136 retry.go:31] will retry after 2.414570713s: waiting for machine to come up
	I0722 11:37:42.417711   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:42.418180   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | unable to find current IP address of domain kubernetes-upgrade-651148 in network mk-kubernetes-upgrade-651148
	I0722 11:37:42.418204   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:42.418150   50136 retry.go:31] will retry after 3.156966243s: waiting for machine to come up
	I0722 11:37:45.576484   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:45.577020   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | unable to find current IP address of domain kubernetes-upgrade-651148 in network mk-kubernetes-upgrade-651148
	I0722 11:37:45.577041   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:45.576971   50136 retry.go:31] will retry after 2.885346645s: waiting for machine to come up
	I0722 11:37:48.465761   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:48.466167   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | unable to find current IP address of domain kubernetes-upgrade-651148 in network mk-kubernetes-upgrade-651148
	I0722 11:37:48.466188   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | I0722 11:37:48.466119   50136 retry.go:31] will retry after 4.810408676s: waiting for machine to come up
	I0722 11:37:53.278533   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:53.279005   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Found IP for machine: 192.168.39.123
	I0722 11:37:53.279035   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Reserving static IP address...
	I0722 11:37:53.279062   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has current primary IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:53.279490   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-651148", mac: "52:54:00:61:7d:ff", ip: "192.168.39.123"} in network mk-kubernetes-upgrade-651148
	I0722 11:37:53.353828   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | Getting to WaitForSSH function...
	I0722 11:37:53.353856   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Reserved static IP address: 192.168.39.123
	I0722 11:37:53.353903   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Waiting for SSH to be available...
	I0722 11:37:53.356685   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:53.357094   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:minikube Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:53.357131   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:53.357192   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | Using SSH client type: external
	I0722 11:37:53.357248   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/kubernetes-upgrade-651148/id_rsa (-rw-------)
	I0722 11:37:53.357295   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/kubernetes-upgrade-651148/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:37:53.357318   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | About to run SSH command:
	I0722 11:37:53.357337   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | exit 0
	I0722 11:37:53.488697   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | SSH cmd err, output: <nil>: 
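The WaitForSSH probe above runs `exit 0` over an external SSH client. A rough manual equivalent, with the flags and key path copied from the DBG line (a sketch, not minikube's actual code path):
	# Rough manual equivalent of the WaitForSSH probe logged above
	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none \
	    -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/kubernetes-upgrade-651148/id_rsa \
	    -p 22 docker@192.168.39.123 'exit 0' && echo "SSH is up"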
	I0722 11:37:53.488985   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) KVM machine creation complete!
	I0722 11:37:53.489262   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetConfigRaw
	I0722 11:37:53.489761   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .DriverName
	I0722 11:37:53.489992   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .DriverName
	I0722 11:37:53.490178   49739 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0722 11:37:53.490194   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetState
	I0722 11:37:53.491541   49739 main.go:141] libmachine: Detecting operating system of created instance...
	I0722 11:37:53.491555   49739 main.go:141] libmachine: Waiting for SSH to be available...
	I0722 11:37:53.491560   49739 main.go:141] libmachine: Getting to WaitForSSH function...
	I0722 11:37:53.491566   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHHostname
	I0722 11:37:53.494060   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:53.494434   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:53.494466   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:53.494602   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHPort
	I0722 11:37:53.494797   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHKeyPath
	I0722 11:37:53.494933   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHKeyPath
	I0722 11:37:53.495066   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHUsername
	I0722 11:37:53.495246   49739 main.go:141] libmachine: Using SSH client type: native
	I0722 11:37:53.495495   49739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0722 11:37:53.495510   49739 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0722 11:37:53.607620   49739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:37:53.607645   49739 main.go:141] libmachine: Detecting the provisioner...
	I0722 11:37:53.607659   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHHostname
	I0722 11:37:53.610549   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:53.610932   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:53.610957   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:53.611118   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHPort
	I0722 11:37:53.611307   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHKeyPath
	I0722 11:37:53.611472   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHKeyPath
	I0722 11:37:53.611617   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHUsername
	I0722 11:37:53.611760   49739 main.go:141] libmachine: Using SSH client type: native
	I0722 11:37:53.611933   49739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0722 11:37:53.611942   49739 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0722 11:37:53.729598   49739 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0722 11:37:53.729686   49739 main.go:141] libmachine: found compatible host: buildroot
	I0722 11:37:53.729699   49739 main.go:141] libmachine: Provisioning with buildroot...
	I0722 11:37:53.729708   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetMachineName
	I0722 11:37:53.729938   49739 buildroot.go:166] provisioning hostname "kubernetes-upgrade-651148"
	I0722 11:37:53.729956   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetMachineName
	I0722 11:37:53.730195   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHHostname
	I0722 11:37:53.732647   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:53.732957   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:53.732987   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:53.733137   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHPort
	I0722 11:37:53.733302   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHKeyPath
	I0722 11:37:53.733426   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHKeyPath
	I0722 11:37:53.733592   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHUsername
	I0722 11:37:53.733793   49739 main.go:141] libmachine: Using SSH client type: native
	I0722 11:37:53.733966   49739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0722 11:37:53.733982   49739 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-651148 && echo "kubernetes-upgrade-651148" | sudo tee /etc/hostname
	I0722 11:37:53.867086   49739 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-651148
	
	I0722 11:37:53.867120   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHHostname
	I0722 11:37:53.870337   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:53.870753   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:53.870785   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:53.870942   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHPort
	I0722 11:37:53.871122   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHKeyPath
	I0722 11:37:53.871261   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHKeyPath
	I0722 11:37:53.871423   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHUsername
	I0722 11:37:53.871630   49739 main.go:141] libmachine: Using SSH client type: native
	I0722 11:37:53.871858   49739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0722 11:37:53.871884   49739 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-651148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-651148/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-651148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:37:53.999428   49739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:37:53.999461   49739 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:37:53.999530   49739 buildroot.go:174] setting up certificates
	I0722 11:37:53.999550   49739 provision.go:84] configureAuth start
	I0722 11:37:53.999570   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetMachineName
	I0722 11:37:53.999873   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetIP
	I0722 11:37:54.002877   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.003293   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:54.003322   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.003430   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHHostname
	I0722 11:37:54.005901   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.006294   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:54.006323   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.006412   49739 provision.go:143] copyHostCerts
	I0722 11:37:54.006473   49739 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:37:54.006488   49739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:37:54.006548   49739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:37:54.006659   49739 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:37:54.006670   49739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:37:54.006700   49739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:37:54.006770   49739 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:37:54.006779   49739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:37:54.006803   49739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:37:54.006861   49739 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-651148 san=[127.0.0.1 192.168.39.123 kubernetes-upgrade-651148 localhost minikube]
	I0722 11:37:54.194617   49739 provision.go:177] copyRemoteCerts
	I0722 11:37:54.194669   49739 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:37:54.194691   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHHostname
	I0722 11:37:54.197417   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.197756   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:54.197786   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.198018   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHPort
	I0722 11:37:54.198197   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHKeyPath
	I0722 11:37:54.198364   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHUsername
	I0722 11:37:54.198480   49739 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/kubernetes-upgrade-651148/id_rsa Username:docker}
	I0722 11:37:54.288337   49739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:37:54.318450   49739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0722 11:37:54.344013   49739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:37:54.367944   49739 provision.go:87] duration metric: took 368.378708ms to configureAuth
	I0722 11:37:54.367972   49739 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:37:54.368171   49739 config.go:182] Loaded profile config "kubernetes-upgrade-651148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 11:37:54.368261   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHHostname
	I0722 11:37:54.370649   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.370979   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:54.371010   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.371267   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHPort
	I0722 11:37:54.371497   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHKeyPath
	I0722 11:37:54.371659   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHKeyPath
	I0722 11:37:54.371828   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHUsername
	I0722 11:37:54.372015   49739 main.go:141] libmachine: Using SSH client type: native
	I0722 11:37:54.372240   49739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0722 11:37:54.372263   49739 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:37:54.643442   49739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:37:54.643470   49739 main.go:141] libmachine: Checking connection to Docker...
	I0722 11:37:54.643481   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetURL
	I0722 11:37:54.644804   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | Using libvirt version 6000000
	I0722 11:37:54.647268   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.647592   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:54.647621   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.647781   49739 main.go:141] libmachine: Docker is up and running!
	I0722 11:37:54.647792   49739 main.go:141] libmachine: Reticulating splines...
	I0722 11:37:54.647798   49739 client.go:171] duration metric: took 24.816341996s to LocalClient.Create
	I0722 11:37:54.647818   49739 start.go:167] duration metric: took 24.816400877s to libmachine.API.Create "kubernetes-upgrade-651148"
	I0722 11:37:54.647831   49739 start.go:293] postStartSetup for "kubernetes-upgrade-651148" (driver="kvm2")
	I0722 11:37:54.647843   49739 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:37:54.647873   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .DriverName
	I0722 11:37:54.648115   49739 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:37:54.648133   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHHostname
	I0722 11:37:54.650517   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.650812   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:54.650827   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.651167   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHPort
	I0722 11:37:54.651359   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHKeyPath
	I0722 11:37:54.651505   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHUsername
	I0722 11:37:54.651667   49739 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/kubernetes-upgrade-651148/id_rsa Username:docker}
	I0722 11:37:54.742992   49739 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:37:54.747502   49739 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:37:54.747527   49739 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:37:54.747590   49739 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:37:54.747675   49739 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:37:54.747786   49739 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:37:54.757812   49739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:37:54.782813   49739 start.go:296] duration metric: took 134.952391ms for postStartSetup
	I0722 11:37:54.782854   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetConfigRaw
	I0722 11:37:54.783403   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetIP
	I0722 11:37:54.785925   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.786244   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:54.786275   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.786493   49739 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/config.json ...
	I0722 11:37:54.786703   49739 start.go:128] duration metric: took 24.975834639s to createHost
	I0722 11:37:54.786726   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHHostname
	I0722 11:37:54.789047   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.789378   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:54.789397   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.789555   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHPort
	I0722 11:37:54.789709   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHKeyPath
	I0722 11:37:54.789960   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHKeyPath
	I0722 11:37:54.790093   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHUsername
	I0722 11:37:54.790295   49739 main.go:141] libmachine: Using SSH client type: native
	I0722 11:37:54.790501   49739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0722 11:37:54.790529   49739 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 11:37:54.905970   49739 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721648274.878077525
	
	I0722 11:37:54.905992   49739 fix.go:216] guest clock: 1721648274.878077525
	I0722 11:37:54.905999   49739 fix.go:229] Guest: 2024-07-22 11:37:54.878077525 +0000 UTC Remote: 2024-07-22 11:37:54.78671617 +0000 UTC m=+52.079368795 (delta=91.361355ms)
	I0722 11:37:54.906017   49739 fix.go:200] guest clock delta is within tolerance: 91.361355ms
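For reference, the 91.361355ms delta reported above is simply the guest timestamp minus the host timestamp from the preceding line; a quick check:
	echo "1721648274.878077525 - 1721648274.786716170" | bc
	# .091361355  (~91.36 ms)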
	I0722 11:37:54.906021   49739 start.go:83] releasing machines lock for "kubernetes-upgrade-651148", held for 25.095334236s
	I0722 11:37:54.906042   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .DriverName
	I0722 11:37:54.906314   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetIP
	I0722 11:37:54.909035   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.909505   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:54.909536   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.909752   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .DriverName
	I0722 11:37:54.910296   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .DriverName
	I0722 11:37:54.910494   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .DriverName
	I0722 11:37:54.910576   49739 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:37:54.910628   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHHostname
	I0722 11:37:54.910732   49739 ssh_runner.go:195] Run: cat /version.json
	I0722 11:37:54.910761   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHHostname
	I0722 11:37:54.913956   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.914014   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.914101   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:54.914131   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.914421   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHPort
	I0722 11:37:54.914532   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:54.914561   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:54.914633   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHKeyPath
	I0722 11:37:54.914852   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHPort
	I0722 11:37:54.915002   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHKeyPath
	I0722 11:37:54.915016   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHUsername
	I0722 11:37:54.915213   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetSSHUsername
	I0722 11:37:54.915229   49739 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/kubernetes-upgrade-651148/id_rsa Username:docker}
	I0722 11:37:54.915366   49739 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/kubernetes-upgrade-651148/id_rsa Username:docker}
	I0722 11:37:55.005490   49739 ssh_runner.go:195] Run: systemctl --version
	I0722 11:37:55.032190   49739 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:37:55.190827   49739 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:37:55.197613   49739 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:37:55.197681   49739 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:37:55.216130   49739 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:37:55.216158   49739 start.go:495] detecting cgroup driver to use...
	I0722 11:37:55.216216   49739 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:37:55.232743   49739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:37:55.248494   49739 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:37:55.248558   49739 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:37:55.263212   49739 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:37:55.277074   49739 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:37:55.399691   49739 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:37:55.576964   49739 docker.go:233] disabling docker service ...
	I0722 11:37:55.577054   49739 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:37:55.592779   49739 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:37:55.605977   49739 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:37:55.754643   49739 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:37:55.891534   49739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:37:55.907688   49739 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:37:55.927927   49739 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 11:37:55.927994   49739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:37:55.938530   49739 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:37:55.938593   49739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:37:55.949879   49739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:37:55.964128   49739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:37:55.976168   49739 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:37:55.989087   49739 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:37:55.998694   49739 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:37:55.998760   49739 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:37:56.013364   49739 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:37:56.023931   49739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:37:56.144854   49739 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:37:56.280613   49739 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:37:56.280715   49739 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:37:56.285511   49739 start.go:563] Will wait 60s for crictl version
	I0722 11:37:56.285566   49739 ssh_runner.go:195] Run: which crictl
	I0722 11:37:56.289753   49739 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:37:56.337414   49739 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:37:56.337522   49739 ssh_runner.go:195] Run: crio --version
	I0722 11:37:56.376847   49739 ssh_runner.go:195] Run: crio --version
	I0722 11:37:56.415775   49739 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 11:37:56.416953   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetIP
	I0722 11:37:56.419838   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:56.420314   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:37:45 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:37:56.420344   49739 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:37:56.420656   49739 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 11:37:56.425534   49739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:37:56.438411   49739 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-651148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-651148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:37:56.438532   49739 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 11:37:56.438605   49739 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:37:56.475967   49739 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:37:56.476047   49739 ssh_runner.go:195] Run: which lz4
	I0722 11:37:56.479995   49739 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 11:37:56.484553   49739 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:37:56.484585   49739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 11:37:58.194489   49739 crio.go:462] duration metric: took 1.714533337s to copy over tarball
	I0722 11:37:58.194567   49739 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:38:00.782504   49739 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.587907011s)
	I0722 11:38:00.782538   49739 crio.go:469] duration metric: took 2.588024855s to extract the tarball
	I0722 11:38:00.782547   49739 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:38:00.827505   49739 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:38:00.882765   49739 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:38:00.882794   49739 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 11:38:00.882869   49739 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:38:00.883131   49739 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:38:00.883273   49739 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:38:00.883382   49739 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:38:00.883490   49739 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:38:00.883610   49739 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 11:38:00.883737   49739 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:38:00.883848   49739 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 11:38:00.885357   49739 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:38:00.885635   49739 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:38:00.885676   49739 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 11:38:00.885759   49739 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:38:00.885787   49739 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:38:00.885875   49739 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:38:00.885898   49739 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:38:00.885949   49739 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 11:38:01.064067   49739 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:38:01.074403   49739 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 11:38:01.083435   49739 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:38:01.084878   49739 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 11:38:01.085117   49739 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:38:01.092356   49739 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:38:01.134139   49739 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 11:38:01.173285   49739 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:38:01.202242   49739 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 11:38:01.202288   49739 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:38:01.202335   49739 ssh_runner.go:195] Run: which crictl
	I0722 11:38:01.240374   49739 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 11:38:01.240432   49739 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 11:38:01.240503   49739 ssh_runner.go:195] Run: which crictl
	I0722 11:38:01.274480   49739 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 11:38:01.274524   49739 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:38:01.274575   49739 ssh_runner.go:195] Run: which crictl
	I0722 11:38:01.297380   49739 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 11:38:01.297422   49739 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:38:01.297468   49739 ssh_runner.go:195] Run: which crictl
	I0722 11:38:01.328043   49739 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 11:38:01.328092   49739 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:38:01.328131   49739 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 11:38:01.328178   49739 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:38:01.328204   49739 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 11:38:01.328222   49739 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 11:38:01.328251   49739 ssh_runner.go:195] Run: which crictl
	I0722 11:38:01.328223   49739 ssh_runner.go:195] Run: which crictl
	I0722 11:38:01.328145   49739 ssh_runner.go:195] Run: which crictl
	I0722 11:38:01.470692   49739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 11:38:01.470789   49739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 11:38:01.470692   49739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:38:01.470743   49739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:38:01.470749   49739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 11:38:01.470922   49739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:38:01.470960   49739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:38:01.674449   49739 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0722 11:38:01.674581   49739 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 11:38:01.674648   49739 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 11:38:01.674743   49739 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 11:38:01.674776   49739 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 11:38:01.674802   49739 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 11:38:01.674836   49739 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 11:38:01.674878   49739 cache_images.go:92] duration metric: took 792.066784ms to LoadCachedImages
	W0722 11:38:01.674973   49739 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
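The warning above means the host-side image cache under .minikube/cache/images was empty, so the v1.20.0 control-plane images will be pulled on the node instead. If one wanted to avoid it, the cache could be populated ahead of time; a hypothetical example, assuming the `minikube cache add` subcommand is available in this build:
	# Hypothetical pre-population of the host-side image cache referenced above
	minikube cache add registry.k8s.io/kube-apiserver:v1.20.0
	minikube cache add registry.k8s.io/kube-controller-manager:v1.20.0
	minikube cache add registry.k8s.io/kube-scheduler:v1.20.0
	minikube cache add registry.k8s.io/kube-proxy:v1.20.0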
	I0722 11:38:01.674991   49739 kubeadm.go:934] updating node { 192.168.39.123 8443 v1.20.0 crio true true} ...
	I0722 11:38:01.675083   49739 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-651148 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-651148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:38:01.675174   49739 ssh_runner.go:195] Run: crio config
	I0722 11:38:01.743513   49739 cni.go:84] Creating CNI manager for ""
	I0722 11:38:01.743533   49739 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:38:01.743541   49739 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:38:01.743558   49739 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-651148 NodeName:kubernetes-upgrade-651148 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 11:38:01.743711   49739 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-651148"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:38:01.743783   49739 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 11:38:01.756322   49739 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:38:01.756407   49739 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:38:01.768989   49739 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0722 11:38:01.789817   49739 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:38:01.810434   49739 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0722 11:38:01.836237   49739 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I0722 11:38:01.842541   49739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:38:01.859078   49739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:38:02.023716   49739 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:38:02.049691   49739 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148 for IP: 192.168.39.123
	I0722 11:38:02.049712   49739 certs.go:194] generating shared ca certs ...
	I0722 11:38:02.049733   49739 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:38:02.049909   49739 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:38:02.049993   49739 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:38:02.050008   49739 certs.go:256] generating profile certs ...
	I0722 11:38:02.050078   49739 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/client.key
	I0722 11:38:02.050098   49739 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/client.crt with IP's: []
	I0722 11:38:02.232221   49739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/client.crt ...
	I0722 11:38:02.232264   49739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/client.crt: {Name:mk96573ad9eafeaea86c54a20bd9db01720edd03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:38:02.232515   49739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/client.key ...
	I0722 11:38:02.232542   49739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/client.key: {Name:mk226fb6d6dd447776fa4a8e27b1a1c98f4cbdb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:38:02.232679   49739 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/apiserver.key.983df103
	I0722 11:38:02.232706   49739 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/apiserver.crt.983df103 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.123]
	I0722 11:38:02.535150   49739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/apiserver.crt.983df103 ...
	I0722 11:38:02.535187   49739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/apiserver.crt.983df103: {Name:mkeccc533f00b04d2cce8db1a6afef807015034e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:38:02.539692   49739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/apiserver.key.983df103 ...
	I0722 11:38:02.539726   49739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/apiserver.key.983df103: {Name:mk26664c0509904a666d1ffe5ed480186a0f1aa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:38:02.539865   49739 certs.go:381] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/apiserver.crt.983df103 -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/apiserver.crt
	I0722 11:38:02.539994   49739 certs.go:385] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/apiserver.key.983df103 -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/apiserver.key
	I0722 11:38:02.540077   49739 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/proxy-client.key
	I0722 11:38:02.540099   49739 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/proxy-client.crt with IP's: []
	I0722 11:38:02.693799   49739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/proxy-client.crt ...
	I0722 11:38:02.693831   49739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/proxy-client.crt: {Name:mk7754712645d1f2057a60aeb70111f72a22fed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:38:02.695341   49739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/proxy-client.key ...
	I0722 11:38:02.695372   49739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/proxy-client.key: {Name:mkca42b552316d3ecb031c4937cd095182cc95e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:38:02.695585   49739 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:38:02.695630   49739 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:38:02.695643   49739 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:38:02.695672   49739 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:38:02.695701   49739 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:38:02.695728   49739 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:38:02.695773   49739 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:38:02.696548   49739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:38:02.728085   49739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:38:02.757021   49739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:38:02.791411   49739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:38:02.817577   49739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0722 11:38:02.850115   49739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 11:38:02.878706   49739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:38:02.906608   49739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:38:02.932996   49739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:38:02.961152   49739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:38:02.990417   49739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:38:03.020689   49739 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
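	# Illustrative sketch (not output from this run): the apiserver certificate generated above was
	# signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.123; the SANs can be confirmed on the
	# copy placed under /var/lib/minikube/certs inside the guest:
	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A 2 "Subject Alternative Name"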
	I0722 11:38:03.040028   49739 ssh_runner.go:195] Run: openssl version
	I0722 11:38:03.046208   49739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:38:03.062693   49739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:38:03.068820   49739 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:38:03.068896   49739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:38:03.076774   49739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:38:03.089965   49739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:38:03.105307   49739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:38:03.110551   49739 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:38:03.110626   49739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:38:03.118417   49739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:38:03.135096   49739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:38:03.148719   49739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:38:03.155803   49739 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:38:03.155857   49739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:38:03.166393   49739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
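	# Illustrative sketch (not output from this run): the /etc/ssl/certs/<hash>.0 link names above are
	# the openssl subject hashes of the corresponding PEM files, which is how the system trust store
	# locates them:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                            # -> /etc/ssl/certs/minikubeCA.pem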
	I0722 11:38:03.182283   49739 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:38:03.187856   49739 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 11:38:03.187916   49739 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-651148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-651148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:38:03.187995   49739 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:38:03.188049   49739 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:38:03.238341   49739 cri.go:89] found id: ""
	I0722 11:38:03.238419   49739 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:38:03.252674   49739 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:38:03.262837   49739 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:38:03.273632   49739 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:38:03.273650   49739 kubeadm.go:157] found existing configuration files:
	
	I0722 11:38:03.273691   49739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:38:03.286222   49739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:38:03.286301   49739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:38:03.296245   49739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:38:03.305385   49739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:38:03.305437   49739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:38:03.315270   49739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:38:03.324952   49739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:38:03.325003   49739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:38:03.335463   49739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:38:03.346847   49739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:38:03.346920   49739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
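	# Illustrative sketch (not output from this run) of the stale-config check performed above: each
	# kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane
	# endpoint; here none of the files exist yet, so every grep exits non-zero and the rm calls are no-ops.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done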
	I0722 11:38:03.356986   49739 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:38:03.477331   49739 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:38:03.477527   49739 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:38:03.660089   49739 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:38:03.660239   49739 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:38:03.660373   49739 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:38:03.884761   49739 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:38:03.946683   49739 out.go:204]   - Generating certificates and keys ...
	I0722 11:38:03.946799   49739 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:38:03.946899   49739 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:38:04.053030   49739 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0722 11:38:04.237099   49739 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0722 11:38:04.313446   49739 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0722 11:38:04.450337   49739 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0722 11:38:04.662769   49739 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0722 11:38:04.663627   49739 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-651148 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	I0722 11:38:04.860555   49739 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0722 11:38:04.861015   49739 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-651148 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	I0722 11:38:05.175306   49739 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0722 11:38:05.252704   49739 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0722 11:38:05.427273   49739 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0722 11:38:05.427709   49739 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:38:05.510716   49739 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:38:05.745312   49739 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:38:05.857530   49739 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:38:06.046520   49739 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:38:06.070667   49739 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:38:06.071806   49739 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:38:06.071878   49739 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:38:06.233582   49739 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:38:06.234882   49739 out.go:204]   - Booting up control plane ...
	I0722 11:38:06.235006   49739 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:38:06.250480   49739 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:38:06.251784   49739 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:38:06.252839   49739 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:38:06.261103   49739 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:38:46.255479   49739 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:38:46.256289   49739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:38:46.256564   49739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:38:51.257167   49739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:38:51.257390   49739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:39:01.256743   49739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:39:01.256967   49739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:39:21.255978   49739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:39:21.256178   49739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:40:01.257715   49739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:40:01.258000   49739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:40:01.258024   49739 kubeadm.go:310] 
	I0722 11:40:01.258101   49739 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:40:01.258184   49739 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:40:01.258195   49739 kubeadm.go:310] 
	I0722 11:40:01.258247   49739 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:40:01.258293   49739 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:40:01.258441   49739 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:40:01.258455   49739 kubeadm.go:310] 
	I0722 11:40:01.258601   49739 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:40:01.258648   49739 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:40:01.258694   49739 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:40:01.258704   49739 kubeadm.go:310] 
	I0722 11:40:01.258862   49739 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:40:01.259012   49739 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:40:01.259029   49739 kubeadm.go:310] 
	I0722 11:40:01.259176   49739 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:40:01.259295   49739 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:40:01.259395   49739 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:40:01.259494   49739 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:40:01.259504   49739 kubeadm.go:310] 
	I0722 11:40:01.260780   49739 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:40:01.260866   49739 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:40:01.261036   49739 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
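	# Illustrative sketch (not output from this run): the first kubeadm init attempt times out in
	# wait-control-plane because the kubelet never answers http://localhost:10248/healthz. The checks
	# suggested in the output can be run from the host via the profile's SSH session, assuming curl is
	# available in the guest:
	minikube -p kubernetes-upgrade-651148 ssh -- sudo systemctl status kubelet --no-pager
	minikube -p kubernetes-upgrade-651148 ssh -- sudo journalctl -u kubelet --no-pager -n 100
	minikube -p kubernetes-upgrade-651148 ssh -- curl -sSL http://localhost:10248/healthz
	minikube -p kubernetes-upgrade-651148 ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"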
	W0722 11:40:01.261123   49739 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-651148 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-651148 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-651148 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-651148 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
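	# Illustrative sketch (not output from this run, and not a confirmed root cause): the generated
	# KubeletConfiguration above sets cgroupDriver: cgroupfs, and with CRI-O the kubelet only starts
	# when both sides agree on the cgroup manager, so the two settings are worth comparing in the guest:
	grep cgroupDriver /var/lib/kubelet/config.yaml
	sudo grep -r cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null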
	
	I0722 11:40:01.261191   49739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:40:02.013305   49739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:40:02.027721   49739 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:40:02.040682   49739 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:40:02.040705   49739 kubeadm.go:157] found existing configuration files:
	
	I0722 11:40:02.040768   49739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:40:02.053169   49739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:40:02.053237   49739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:40:02.066139   49739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:40:02.077910   49739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:40:02.077971   49739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:40:02.089023   49739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:40:02.098148   49739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:40:02.098198   49739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:40:02.108619   49739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:40:02.122245   49739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:40:02.122308   49739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:40:02.135392   49739 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:40:02.226524   49739 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:40:02.226596   49739 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:40:02.398821   49739 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:40:02.398988   49739 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:40:02.399125   49739 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:40:02.599872   49739 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:40:02.665019   49739 out.go:204]   - Generating certificates and keys ...
	I0722 11:40:02.665159   49739 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:40:02.665259   49739 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:40:02.665370   49739 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:40:02.665515   49739 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:40:02.665623   49739 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:40:02.665685   49739 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:40:02.665758   49739 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:40:02.665857   49739 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:40:02.665970   49739 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:40:02.666069   49739 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:40:02.666124   49739 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:40:02.666204   49739 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:40:02.812354   49739 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:40:03.158513   49739 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:40:03.417138   49739 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:40:03.737537   49739 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:40:03.755783   49739 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:40:03.755968   49739 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:40:03.756068   49739 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:40:03.930511   49739 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:40:03.971270   49739 out.go:204]   - Booting up control plane ...
	I0722 11:40:03.971412   49739 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:40:03.971514   49739 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:40:03.971593   49739 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:40:03.971691   49739 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:40:03.971879   49739 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:40:43.962735   49739 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:40:43.963534   49739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:40:43.963790   49739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:40:48.964455   49739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:40:48.964681   49739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:40:58.965743   49739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:40:58.966026   49739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:41:18.964648   49739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:41:18.964844   49739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:41:58.964037   49739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:41:58.964253   49739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:41:58.964266   49739 kubeadm.go:310] 
	I0722 11:41:58.964326   49739 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:41:58.964368   49739 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:41:58.964400   49739 kubeadm.go:310] 
	I0722 11:41:58.964453   49739 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:41:58.964507   49739 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:41:58.964669   49739 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:41:58.964687   49739 kubeadm.go:310] 
	I0722 11:41:58.964769   49739 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:41:58.964840   49739 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:41:58.964891   49739 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:41:58.964913   49739 kubeadm.go:310] 
	I0722 11:41:58.965060   49739 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:41:58.965190   49739 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:41:58.965204   49739 kubeadm.go:310] 
	I0722 11:41:58.965334   49739 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:41:58.965437   49739 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:41:58.965501   49739 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:41:58.965588   49739 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:41:58.965610   49739 kubeadm.go:310] 
	I0722 11:41:58.966521   49739 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:41:58.966654   49739 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:41:58.966814   49739 kubeadm.go:394] duration metric: took 3m55.778898164s to StartCluster
	I0722 11:41:58.966879   49739 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:41:58.966954   49739 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:41:58.967022   49739 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 11:41:59.010013   49739 cri.go:89] found id: ""
	I0722 11:41:59.010036   49739 logs.go:276] 0 containers: []
	W0722 11:41:59.010044   49739 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:41:59.010052   49739 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:41:59.010099   49739 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:41:59.046653   49739 cri.go:89] found id: ""
	I0722 11:41:59.046679   49739 logs.go:276] 0 containers: []
	W0722 11:41:59.046686   49739 logs.go:278] No container was found matching "etcd"
	I0722 11:41:59.046692   49739 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:41:59.046754   49739 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:41:59.081604   49739 cri.go:89] found id: ""
	I0722 11:41:59.081630   49739 logs.go:276] 0 containers: []
	W0722 11:41:59.081637   49739 logs.go:278] No container was found matching "coredns"
	I0722 11:41:59.081643   49739 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:41:59.081696   49739 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:41:59.117342   49739 cri.go:89] found id: ""
	I0722 11:41:59.117367   49739 logs.go:276] 0 containers: []
	W0722 11:41:59.117376   49739 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:41:59.117383   49739 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:41:59.117439   49739 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:41:59.152021   49739 cri.go:89] found id: ""
	I0722 11:41:59.152051   49739 logs.go:276] 0 containers: []
	W0722 11:41:59.152062   49739 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:41:59.152070   49739 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:41:59.152125   49739 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:41:59.188674   49739 cri.go:89] found id: ""
	I0722 11:41:59.188707   49739 logs.go:276] 0 containers: []
	W0722 11:41:59.188727   49739 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:41:59.188735   49739 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:41:59.188792   49739 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:41:59.238447   49739 cri.go:89] found id: ""
	I0722 11:41:59.238477   49739 logs.go:276] 0 containers: []
	W0722 11:41:59.238488   49739 logs.go:278] No container was found matching "kindnet"
	I0722 11:41:59.238499   49739 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:41:59.238517   49739 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:41:59.373728   49739 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:41:59.373760   49739 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:41:59.373776   49739 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:41:59.478613   49739 logs.go:123] Gathering logs for container status ...
	I0722 11:41:59.478653   49739 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:41:59.530378   49739 logs.go:123] Gathering logs for kubelet ...
	I0722 11:41:59.530417   49739 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:41:59.589685   49739 logs.go:123] Gathering logs for dmesg ...
	I0722 11:41:59.589724   49739 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
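	# Illustrative sketch (not output from this run): the kubelet, dmesg, CRI-O and container-status
	# output gathered above is the same diagnostics bundle that can be pulled from the profile in a
	# single command for offline inspection:
	minikube -p kubernetes-upgrade-651148 logs --file=kubernetes-upgrade-651148.log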
	W0722 11:41:59.604509   49739 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 11:41:59.604554   49739 out.go:239] * 
	W0722 11:41:59.604619   49739 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:41:59.604650   49739 out.go:239] * 
	W0722 11:41:59.605483   49739 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 11:41:59.608877   49739 out.go:177] 
	W0722 11:41:59.610115   49739 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:41:59.610170   49739 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 11:41:59.610195   49739 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 11:41:59.611616   49739 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-651148 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
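The kubelet never answers on localhost:10248 during the v1.20.0 start, which is the failure mode minikube's own suggestion above ties to the kubelet cgroup driver under cri-o. A minimal diagnostic sketch, reusing only commands and flags that appear in the log above (the profile name is taken from this run; that the systemd cgroup driver is the actual culprit is an assumption, not something the log proves):

	# Check kubelet health on the node, per the kubeadm troubleshooting hints above
	out/minikube-linux-amd64 -p kubernetes-upgrade-651148 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p kubernetes-upgrade-651148 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	# List control-plane containers via crictl, as the kubeadm output recommends
	out/minikube-linux-amd64 -p kubernetes-upgrade-651148 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup driver minikube itself suggests for this failure
	out/minikube-linux-amd64 start -p kubernetes-upgrade-651148 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd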
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-651148
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-651148: (1.408891507s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-651148 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-651148 status --format={{.Host}}: exit status 7 (71.988434ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-651148 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-651148 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.754985605s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-651148 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-651148 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-651148 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (75.755317ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-651148] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19313
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-651148
	    minikube start -p kubernetes-upgrade-651148 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6511482 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-651148 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
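The downgrade attempt is refused up front (exit status 106) by the K8S_DOWNGRADE_UNSUPPORTED guard rather than being tried and failing midway. If a v1.20.0 cluster were actually wanted here, the supported route is option 1 from the suggestion above, sketched with the driver/runtime flags this test passes (outside the test harness the plain minikube binary would replace out/minikube-linux-amd64):

	# Recreate the profile at the older version instead of downgrading in place
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-651148
	out/minikube-linux-amd64 start -p kubernetes-upgrade-651148 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio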
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-651148 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-651148 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m10.584100592s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-22 11:44:51.623448773 +0000 UTC m=+4562.350863109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-651148 -n kubernetes-upgrade-651148
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-651148 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-651148 logs -n 25: (3.532133135s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p pause-812059                                       | pause-812059              | jenkins | v1.33.1 | 22 Jul 24 11:39 UTC | 22 Jul 24 11:39 UTC |
	|         | --alsologtostderr -v=5                                |                           |         |         |                     |                     |
	| pause   | -p pause-812059                                       | pause-812059              | jenkins | v1.33.1 | 22 Jul 24 11:39 UTC | 22 Jul 24 11:39 UTC |
	|         | --alsologtostderr -v=5                                |                           |         |         |                     |                     |
	| delete  | -p pause-812059                                       | pause-812059              | jenkins | v1.33.1 | 22 Jul 24 11:39 UTC | 22 Jul 24 11:39 UTC |
	|         | --alsologtostderr -v=5                                |                           |         |         |                     |                     |
	| delete  | -p pause-812059                                       | pause-812059              | jenkins | v1.33.1 | 22 Jul 24 11:39 UTC | 22 Jul 24 11:39 UTC |
	| start   | -p force-systemd-flag-989072                          | force-systemd-flag-989072 | jenkins | v1.33.1 | 22 Jul 24 11:39 UTC | 22 Jul 24 11:40 UTC |
	|         | --memory=2048 --force-systemd                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-006328 stop                           | minikube                  | jenkins | v1.26.0 | 22 Jul 24 11:39 UTC | 22 Jul 24 11:39 UTC |
	| start   | -p stopped-upgrade-006328                             | stopped-upgrade-006328    | jenkins | v1.33.1 | 22 Jul 24 11:39 UTC | 22 Jul 24 11:41 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-989072 ssh cat                     | force-systemd-flag-989072 | jenkins | v1.33.1 | 22 Jul 24 11:40 UTC | 22 Jul 24 11:40 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-989072                          | force-systemd-flag-989072 | jenkins | v1.33.1 | 22 Jul 24 11:40 UTC | 22 Jul 24 11:40 UTC |
	| start   | -p cert-options-435680                                | cert-options-435680       | jenkins | v1.33.1 | 22 Jul 24 11:40 UTC | 22 Jul 24 11:41 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-006328                             | stopped-upgrade-006328    | jenkins | v1.33.1 | 22 Jul 24 11:41 UTC | 22 Jul 24 11:41 UTC |
	| start   | -p old-k8s-version-101261                             | old-k8s-version-101261    | jenkins | v1.33.1 | 22 Jul 24 11:41 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| ssh     | cert-options-435680 ssh                               | cert-options-435680       | jenkins | v1.33.1 | 22 Jul 24 11:41 UTC | 22 Jul 24 11:41 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |         |                     |                     |
	| ssh     | -p cert-options-435680 -- sudo                        | cert-options-435680       | jenkins | v1.33.1 | 22 Jul 24 11:41 UTC | 22 Jul 24 11:41 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |         |                     |                     |
	| delete  | -p cert-options-435680                                | cert-options-435680       | jenkins | v1.33.1 | 22 Jul 24 11:41 UTC | 22 Jul 24 11:41 UTC |
	| start   | -p no-preload-339929 --memory=2200                    | no-preload-339929         | jenkins | v1.33.1 | 22 Jul 24 11:41 UTC | 22 Jul 24 11:43 UTC |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                   |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-651148                          | kubernetes-upgrade-651148 | jenkins | v1.33.1 | 22 Jul 24 11:41 UTC | 22 Jul 24 11:42 UTC |
	| start   | -p kubernetes-upgrade-651148                          | kubernetes-upgrade-651148 | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC | 22 Jul 24 11:42 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                   |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-651148                          | kubernetes-upgrade-651148 | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-651148                          | kubernetes-upgrade-651148 | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC | 22 Jul 24 11:44 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                   |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-467176                             | cert-expiration-467176    | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                               |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-339929            | no-preload-339929         | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p no-preload-339929                                  | no-preload-339929         | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-467176                             | cert-expiration-467176    | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	| start   | -p embed-certs-802149                                 | embed-certs-802149        | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:44 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                          |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 11:43:43
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 11:43:43.382856   57359 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:43:43.382959   57359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:43:43.382967   57359 out.go:304] Setting ErrFile to fd 2...
	I0722 11:43:43.382971   57359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:43:43.383150   57359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:43:43.383664   57359 out.go:298] Setting JSON to false
	I0722 11:43:43.384550   57359 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5175,"bootTime":1721643448,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 11:43:43.384600   57359 start.go:139] virtualization: kvm guest
	I0722 11:43:43.386459   57359 out.go:177] * [embed-certs-802149] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 11:43:43.387692   57359 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 11:43:43.387720   57359 notify.go:220] Checking for updates...
	I0722 11:43:43.390031   57359 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 11:43:43.391177   57359 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:43:43.392391   57359 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:43:43.393524   57359 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 11:43:43.394685   57359 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 11:43:43.396315   57359 config.go:182] Loaded profile config "kubernetes-upgrade-651148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 11:43:43.396459   57359 config.go:182] Loaded profile config "no-preload-339929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 11:43:43.396553   57359 config.go:182] Loaded profile config "old-k8s-version-101261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 11:43:43.396643   57359 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 11:43:43.431219   57359 out.go:177] * Using the kvm2 driver based on user configuration
	I0722 11:43:43.432432   57359 start.go:297] selected driver: kvm2
	I0722 11:43:43.432448   57359 start.go:901] validating driver "kvm2" against <nil>
	I0722 11:43:43.432461   57359 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 11:43:43.433462   57359 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:43:43.433545   57359 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 11:43:43.448437   57359 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 11:43:43.448474   57359 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 11:43:43.448679   57359 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:43:43.448726   57359 cni.go:84] Creating CNI manager for ""
	I0722 11:43:43.448735   57359 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:43:43.448745   57359 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 11:43:43.448789   57359 start.go:340] cluster config:
	{Name:embed-certs-802149 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:43:43.448884   57359 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:43:43.450474   57359 out.go:177] * Starting "embed-certs-802149" primary control-plane node in "embed-certs-802149" cluster
	I0722 11:43:43.451712   57359 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:43:43.451747   57359 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 11:43:43.451756   57359 cache.go:56] Caching tarball of preloaded images
	I0722 11:43:43.451839   57359 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 11:43:43.451849   57359 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 11:43:43.451946   57359 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/config.json ...
	I0722 11:43:43.451969   57359 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/config.json: {Name:mkc0c12cabeb171fc85d6674f1e79989cd1d435f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:43:43.452114   57359 start.go:360] acquireMachinesLock for embed-certs-802149: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:43:43.452148   57359 start.go:364] duration metric: took 17.172µs to acquireMachinesLock for "embed-certs-802149"
	I0722 11:43:43.452171   57359 start.go:93] Provisioning new machine with config: &{Name:embed-certs-802149 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:43:43.452256   57359 start.go:125] createHost starting for "" (driver="kvm2")
	I0722 11:43:43.454531   57359 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 11:43:43.454659   57359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:43:43.454698   57359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:43:43.469064   57359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45713
	I0722 11:43:43.469425   57359 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:43:43.469953   57359 main.go:141] libmachine: Using API Version  1
	I0722 11:43:43.469974   57359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:43:43.470297   57359 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:43:43.470455   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:43:43.470577   57359 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:43:43.470699   57359 start.go:159] libmachine.API.Create for "embed-certs-802149" (driver="kvm2")
	I0722 11:43:43.470722   57359 client.go:168] LocalClient.Create starting
	I0722 11:43:43.470752   57359 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem
	I0722 11:43:43.470784   57359 main.go:141] libmachine: Decoding PEM data...
	I0722 11:43:43.470802   57359 main.go:141] libmachine: Parsing certificate...
	I0722 11:43:43.470846   57359 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem
	I0722 11:43:43.470863   57359 main.go:141] libmachine: Decoding PEM data...
	I0722 11:43:43.470874   57359 main.go:141] libmachine: Parsing certificate...
	I0722 11:43:43.470889   57359 main.go:141] libmachine: Running pre-create checks...
	I0722 11:43:43.470896   57359 main.go:141] libmachine: (embed-certs-802149) Calling .PreCreateCheck
	I0722 11:43:43.471210   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetConfigRaw
	I0722 11:43:43.471514   57359 main.go:141] libmachine: Creating machine...
	I0722 11:43:43.471527   57359 main.go:141] libmachine: (embed-certs-802149) Calling .Create
	I0722 11:43:43.471632   57359 main.go:141] libmachine: (embed-certs-802149) Creating KVM machine...
	I0722 11:43:43.472766   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found existing default KVM network
	I0722 11:43:43.473818   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:43.473683   57382 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:36:a5:f9} reservation:<nil>}
	I0722 11:43:43.474504   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:43.474450   57382 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:7d:3b:41} reservation:<nil>}
	I0722 11:43:43.475282   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:43.475230   57382 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:a4:b8:6a} reservation:<nil>}
	I0722 11:43:43.476256   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:43.476195   57382 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002c17f0}
	I0722 11:43:43.476335   57359 main.go:141] libmachine: (embed-certs-802149) DBG | created network xml: 
	I0722 11:43:43.476374   57359 main.go:141] libmachine: (embed-certs-802149) DBG | <network>
	I0722 11:43:43.476412   57359 main.go:141] libmachine: (embed-certs-802149) DBG |   <name>mk-embed-certs-802149</name>
	I0722 11:43:43.476434   57359 main.go:141] libmachine: (embed-certs-802149) DBG |   <dns enable='no'/>
	I0722 11:43:43.476486   57359 main.go:141] libmachine: (embed-certs-802149) DBG |   
	I0722 11:43:43.476507   57359 main.go:141] libmachine: (embed-certs-802149) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0722 11:43:43.476525   57359 main.go:141] libmachine: (embed-certs-802149) DBG |     <dhcp>
	I0722 11:43:43.476548   57359 main.go:141] libmachine: (embed-certs-802149) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0722 11:43:43.476560   57359 main.go:141] libmachine: (embed-certs-802149) DBG |     </dhcp>
	I0722 11:43:43.476568   57359 main.go:141] libmachine: (embed-certs-802149) DBG |   </ip>
	I0722 11:43:43.476576   57359 main.go:141] libmachine: (embed-certs-802149) DBG |   
	I0722 11:43:43.476583   57359 main.go:141] libmachine: (embed-certs-802149) DBG | </network>
	I0722 11:43:43.476590   57359 main.go:141] libmachine: (embed-certs-802149) DBG | 
	I0722 11:43:43.481076   57359 main.go:141] libmachine: (embed-certs-802149) DBG | trying to create private KVM network mk-embed-certs-802149 192.168.72.0/24...
	I0722 11:43:43.544480   57359 main.go:141] libmachine: (embed-certs-802149) DBG | private KVM network mk-embed-certs-802149 192.168.72.0/24 created
	I0722 11:43:43.544555   57359 main.go:141] libmachine: (embed-certs-802149) Setting up store path in /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149 ...
	I0722 11:43:43.544583   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:43.544440   57382 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:43:43.544608   57359 main.go:141] libmachine: (embed-certs-802149) Building disk image from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 11:43:43.544637   57359 main.go:141] libmachine: (embed-certs-802149) Downloading /home/jenkins/minikube-integration/19313-5960/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 11:43:43.776467   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:43.776354   57382 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa...
	I0722 11:43:43.977042   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:43.976930   57382 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/embed-certs-802149.rawdisk...
	I0722 11:43:43.977069   57359 main.go:141] libmachine: (embed-certs-802149) DBG | Writing magic tar header
	I0722 11:43:43.977081   57359 main.go:141] libmachine: (embed-certs-802149) DBG | Writing SSH key tar header
	I0722 11:43:43.977089   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:43.977058   57382 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149 ...
	I0722 11:43:43.977186   57359 main.go:141] libmachine: (embed-certs-802149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149
	I0722 11:43:43.977205   57359 main.go:141] libmachine: (embed-certs-802149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines
	I0722 11:43:43.977226   57359 main.go:141] libmachine: (embed-certs-802149) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149 (perms=drwx------)
	I0722 11:43:43.977255   57359 main.go:141] libmachine: (embed-certs-802149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:43:43.977270   57359 main.go:141] libmachine: (embed-certs-802149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960
	I0722 11:43:43.977276   57359 main.go:141] libmachine: (embed-certs-802149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0722 11:43:43.977283   57359 main.go:141] libmachine: (embed-certs-802149) DBG | Checking permissions on dir: /home/jenkins
	I0722 11:43:43.977289   57359 main.go:141] libmachine: (embed-certs-802149) DBG | Checking permissions on dir: /home
	I0722 11:43:43.977298   57359 main.go:141] libmachine: (embed-certs-802149) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines (perms=drwxr-xr-x)
	I0722 11:43:43.977306   57359 main.go:141] libmachine: (embed-certs-802149) DBG | Skipping /home - not owner
	I0722 11:43:43.977315   57359 main.go:141] libmachine: (embed-certs-802149) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube (perms=drwxr-xr-x)
	I0722 11:43:43.977323   57359 main.go:141] libmachine: (embed-certs-802149) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960 (perms=drwxrwxr-x)
	I0722 11:43:43.977329   57359 main.go:141] libmachine: (embed-certs-802149) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0722 11:43:43.977339   57359 main.go:141] libmachine: (embed-certs-802149) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0722 11:43:43.977343   57359 main.go:141] libmachine: (embed-certs-802149) Creating domain...
	I0722 11:43:43.978474   57359 main.go:141] libmachine: (embed-certs-802149) define libvirt domain using xml: 
	I0722 11:43:43.978498   57359 main.go:141] libmachine: (embed-certs-802149) <domain type='kvm'>
	I0722 11:43:43.978509   57359 main.go:141] libmachine: (embed-certs-802149)   <name>embed-certs-802149</name>
	I0722 11:43:43.978523   57359 main.go:141] libmachine: (embed-certs-802149)   <memory unit='MiB'>2200</memory>
	I0722 11:43:43.978539   57359 main.go:141] libmachine: (embed-certs-802149)   <vcpu>2</vcpu>
	I0722 11:43:43.978547   57359 main.go:141] libmachine: (embed-certs-802149)   <features>
	I0722 11:43:43.978565   57359 main.go:141] libmachine: (embed-certs-802149)     <acpi/>
	I0722 11:43:43.978575   57359 main.go:141] libmachine: (embed-certs-802149)     <apic/>
	I0722 11:43:43.978586   57359 main.go:141] libmachine: (embed-certs-802149)     <pae/>
	I0722 11:43:43.978595   57359 main.go:141] libmachine: (embed-certs-802149)     
	I0722 11:43:43.978606   57359 main.go:141] libmachine: (embed-certs-802149)   </features>
	I0722 11:43:43.978617   57359 main.go:141] libmachine: (embed-certs-802149)   <cpu mode='host-passthrough'>
	I0722 11:43:43.978628   57359 main.go:141] libmachine: (embed-certs-802149)   
	I0722 11:43:43.978636   57359 main.go:141] libmachine: (embed-certs-802149)   </cpu>
	I0722 11:43:43.978648   57359 main.go:141] libmachine: (embed-certs-802149)   <os>
	I0722 11:43:43.978658   57359 main.go:141] libmachine: (embed-certs-802149)     <type>hvm</type>
	I0722 11:43:43.978684   57359 main.go:141] libmachine: (embed-certs-802149)     <boot dev='cdrom'/>
	I0722 11:43:43.978705   57359 main.go:141] libmachine: (embed-certs-802149)     <boot dev='hd'/>
	I0722 11:43:43.978715   57359 main.go:141] libmachine: (embed-certs-802149)     <bootmenu enable='no'/>
	I0722 11:43:43.978725   57359 main.go:141] libmachine: (embed-certs-802149)   </os>
	I0722 11:43:43.978734   57359 main.go:141] libmachine: (embed-certs-802149)   <devices>
	I0722 11:43:43.978742   57359 main.go:141] libmachine: (embed-certs-802149)     <disk type='file' device='cdrom'>
	I0722 11:43:43.978753   57359 main.go:141] libmachine: (embed-certs-802149)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/boot2docker.iso'/>
	I0722 11:43:43.978769   57359 main.go:141] libmachine: (embed-certs-802149)       <target dev='hdc' bus='scsi'/>
	I0722 11:43:43.978795   57359 main.go:141] libmachine: (embed-certs-802149)       <readonly/>
	I0722 11:43:43.978823   57359 main.go:141] libmachine: (embed-certs-802149)     </disk>
	I0722 11:43:43.978837   57359 main.go:141] libmachine: (embed-certs-802149)     <disk type='file' device='disk'>
	I0722 11:43:43.978849   57359 main.go:141] libmachine: (embed-certs-802149)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0722 11:43:43.978865   57359 main.go:141] libmachine: (embed-certs-802149)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/embed-certs-802149.rawdisk'/>
	I0722 11:43:43.978875   57359 main.go:141] libmachine: (embed-certs-802149)       <target dev='hda' bus='virtio'/>
	I0722 11:43:43.978887   57359 main.go:141] libmachine: (embed-certs-802149)     </disk>
	I0722 11:43:43.978901   57359 main.go:141] libmachine: (embed-certs-802149)     <interface type='network'>
	I0722 11:43:43.978919   57359 main.go:141] libmachine: (embed-certs-802149)       <source network='mk-embed-certs-802149'/>
	I0722 11:43:43.978932   57359 main.go:141] libmachine: (embed-certs-802149)       <model type='virtio'/>
	I0722 11:43:43.978941   57359 main.go:141] libmachine: (embed-certs-802149)     </interface>
	I0722 11:43:43.978951   57359 main.go:141] libmachine: (embed-certs-802149)     <interface type='network'>
	I0722 11:43:43.978971   57359 main.go:141] libmachine: (embed-certs-802149)       <source network='default'/>
	I0722 11:43:43.978978   57359 main.go:141] libmachine: (embed-certs-802149)       <model type='virtio'/>
	I0722 11:43:43.978984   57359 main.go:141] libmachine: (embed-certs-802149)     </interface>
	I0722 11:43:43.978991   57359 main.go:141] libmachine: (embed-certs-802149)     <serial type='pty'>
	I0722 11:43:43.978996   57359 main.go:141] libmachine: (embed-certs-802149)       <target port='0'/>
	I0722 11:43:43.979003   57359 main.go:141] libmachine: (embed-certs-802149)     </serial>
	I0722 11:43:43.979009   57359 main.go:141] libmachine: (embed-certs-802149)     <console type='pty'>
	I0722 11:43:43.979014   57359 main.go:141] libmachine: (embed-certs-802149)       <target type='serial' port='0'/>
	I0722 11:43:43.979020   57359 main.go:141] libmachine: (embed-certs-802149)     </console>
	I0722 11:43:43.979026   57359 main.go:141] libmachine: (embed-certs-802149)     <rng model='virtio'>
	I0722 11:43:43.979032   57359 main.go:141] libmachine: (embed-certs-802149)       <backend model='random'>/dev/random</backend>
	I0722 11:43:43.979039   57359 main.go:141] libmachine: (embed-certs-802149)     </rng>
	I0722 11:43:43.979044   57359 main.go:141] libmachine: (embed-certs-802149)     
	I0722 11:43:43.979050   57359 main.go:141] libmachine: (embed-certs-802149)     
	I0722 11:43:43.979055   57359 main.go:141] libmachine: (embed-certs-802149)   </devices>
	I0722 11:43:43.979062   57359 main.go:141] libmachine: (embed-certs-802149) </domain>
	I0722 11:43:43.979069   57359 main.go:141] libmachine: (embed-certs-802149) 
	I0722 11:43:43.982927   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:b5:54:7a in network default
	I0722 11:43:43.983479   57359 main.go:141] libmachine: (embed-certs-802149) Ensuring networks are active...
	I0722 11:43:43.983497   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:43:43.984137   57359 main.go:141] libmachine: (embed-certs-802149) Ensuring network default is active
	I0722 11:43:43.984496   57359 main.go:141] libmachine: (embed-certs-802149) Ensuring network mk-embed-certs-802149 is active
	I0722 11:43:43.984939   57359 main.go:141] libmachine: (embed-certs-802149) Getting domain xml...
	I0722 11:43:43.985542   57359 main.go:141] libmachine: (embed-certs-802149) Creating domain...
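
For reference, the sequence above (write a network XML, create the private network, write a domain XML, define and boot the domain) is the standard libvirt workflow. A minimal sketch of that flow in Go, assuming the libvirt.org/go/libvirt bindings; the XML placeholders stand for the documents printed above, and the error handling is illustrative rather than a copy of the kvm2 driver:

    package main

    import (
        "log"

        "libvirt.org/go/libvirt"
    )

    func main() {
        // Same system URI the profile uses (KVMQemuURI:qemu:///system).
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define and start the private network from its XML description
        // (placeholder for the mk-embed-certs-802149 XML printed above).
        netXML := "<network>...</network>"
        network, err := conn.NetworkDefineXML(netXML)
        if err != nil {
            log.Fatal(err)
        }
        if err := network.Create(); err != nil {
            log.Fatal(err)
        }

        // Define and boot the domain from its XML description
        // (placeholder for the <domain type='kvm'> document printed above).
        domXML := "<domain type='kvm'>...</domain>"
        dom, err := conn.DomainDefineXML(domXML)
        if err != nil {
            log.Fatal(err)
        }
        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
    }

From the shell, virsh net-define plus virsh net-start and virsh define plus virsh start achieve the same result.
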
	I0722 11:43:45.172813   57359 main.go:141] libmachine: (embed-certs-802149) Waiting to get IP...
	I0722 11:43:45.173590   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:43:45.173973   57359 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:43:45.174012   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:45.173970   57382 retry.go:31] will retry after 287.978194ms: waiting for machine to come up
	I0722 11:43:45.463466   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:43:45.463953   57359 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:43:45.463974   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:45.463905   57382 retry.go:31] will retry after 289.960629ms: waiting for machine to come up
	I0722 11:43:45.755358   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:43:45.755803   57359 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:43:45.755830   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:45.755756   57382 retry.go:31] will retry after 350.56538ms: waiting for machine to come up
	I0722 11:43:46.108282   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:43:46.108815   57359 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:43:46.108840   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:46.108771   57382 retry.go:31] will retry after 549.760506ms: waiting for machine to come up
	I0722 11:43:46.660443   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:43:46.660869   57359 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:43:46.660893   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:46.660825   57382 retry.go:31] will retry after 556.012903ms: waiting for machine to come up
	I0722 11:43:47.218355   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:43:47.218854   57359 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:43:47.218897   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:47.218815   57382 retry.go:31] will retry after 905.998408ms: waiting for machine to come up
	I0722 11:43:48.125811   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:43:48.126382   57359 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:43:48.126413   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:48.126347   57382 retry.go:31] will retry after 766.457701ms: waiting for machine to come up
	I0722 11:43:48.894581   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:43:48.895021   57359 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:43:48.895057   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:48.894987   57382 retry.go:31] will retry after 1.294536595s: waiting for machine to come up
	I0722 11:43:50.191487   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:43:50.191984   57359 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:43:50.192008   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:50.191944   57382 retry.go:31] will retry after 1.506494797s: waiting for machine to come up
	I0722 11:43:51.700730   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:43:51.701207   57359 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:43:51.701238   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:51.701142   57382 retry.go:31] will retry after 1.89371622s: waiting for machine to come up
	I0722 11:43:53.597366   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:43:53.597976   57359 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:43:53.598020   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:53.597939   57382 retry.go:31] will retry after 1.949352428s: waiting for machine to come up
	I0722 11:43:55.548440   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:43:55.548878   57359 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:43:55.548897   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:55.548853   57382 retry.go:31] will retry after 3.585790047s: waiting for machine to come up
	I0722 11:43:59.135853   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:43:59.136310   57359 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:43:59.136331   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:43:59.136266   57382 retry.go:31] will retry after 2.817066799s: waiting for machine to come up
	I0722 11:44:01.957096   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:01.957502   57359 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:44:01.957534   57359 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:44:01.957473   57382 retry.go:31] will retry after 5.459171651s: waiting for machine to come up
	I0722 11:44:07.420370   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:07.420909   57359 main.go:141] libmachine: (embed-certs-802149) Found IP for machine: 192.168.72.113
	I0722 11:44:07.420933   57359 main.go:141] libmachine: (embed-certs-802149) Reserving static IP address...
	I0722 11:44:07.420962   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has current primary IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:07.421328   57359 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find host DHCP lease matching {name: "embed-certs-802149", mac: "52:54:00:ce:af:8a", ip: "192.168.72.113"} in network mk-embed-certs-802149
	I0722 11:44:07.496412   57359 main.go:141] libmachine: (embed-certs-802149) Reserved static IP address: 192.168.72.113
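
The repeated "will retry after ...: waiting for machine to come up" lines are a poll-with-backoff loop: the driver keeps asking libvirt for a DHCP lease matching the VM's MAC until one appears. A rough sketch of that pattern, with lookupIP as a hypothetical stand-in for the lease query and the delays chosen for illustration (this is not minikube's retry package):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases
    // for the VM's MAC address (52:54:00:ce:af:8a in this run).
    func lookupIP(mac string) (string, error) {
        return "", errors.New("no lease yet") // placeholder
    }

    // waitForIP polls lookupIP with a growing, jittered delay until it succeeds
    // or the overall timeout expires.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay / 2)))
            time.Sleep(delay + jitter)
            if delay < 5*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
    }

    func main() {
        ip, err := waitForIP("52:54:00:ce:af:8a", 2*time.Minute)
        fmt.Println(ip, err)
    }
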
	I0722 11:44:07.496444   57359 main.go:141] libmachine: (embed-certs-802149) DBG | Getting to WaitForSSH function...
	I0722 11:44:07.496453   57359 main.go:141] libmachine: (embed-certs-802149) Waiting for SSH to be available...
	I0722 11:44:07.499282   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:07.499752   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:07.499783   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:07.499907   57359 main.go:141] libmachine: (embed-certs-802149) DBG | Using SSH client type: external
	I0722 11:44:07.499934   57359 main.go:141] libmachine: (embed-certs-802149) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa (-rw-------)
	I0722 11:44:07.499976   57359 main.go:141] libmachine: (embed-certs-802149) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:44:07.499997   57359 main.go:141] libmachine: (embed-certs-802149) DBG | About to run SSH command:
	I0722 11:44:07.500010   57359 main.go:141] libmachine: (embed-certs-802149) DBG | exit 0
	I0722 11:44:07.620133   57359 main.go:141] libmachine: (embed-certs-802149) DBG | SSH cmd err, output: <nil>: 
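
With "SSH client type: external", the availability probe is simply the system ssh binary run with the options logged above. A minimal sketch of issuing that probe from Go with os/exec; the key path, address, and options are taken from the log, while the surrounding code is illustrative:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        key := "/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa"
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "ControlMaster=no",
            "-o", "ControlPath=none",
            "-o", "LogLevel=quiet",
            "-o", "PasswordAuthentication=no",
            "-o", "ServerAliveInterval=60",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", key,
            "-p", "22",
            "docker@192.168.72.113",
            "exit 0", // the probe command run above to confirm SSH is reachable
        }
        if out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput(); err != nil {
            log.Fatalf("ssh probe failed: %v\n%s", err, out)
        }
        log.Println("SSH is available")
    }
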
	I0722 11:44:07.620355   57359 main.go:141] libmachine: (embed-certs-802149) KVM machine creation complete!
	I0722 11:44:07.620712   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetConfigRaw
	I0722 11:44:07.621259   57359 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:44:07.621434   57359 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:44:07.621605   57359 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0722 11:44:07.621638   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:44:07.622969   57359 main.go:141] libmachine: Detecting operating system of created instance...
	I0722 11:44:07.622986   57359 main.go:141] libmachine: Waiting for SSH to be available...
	I0722 11:44:07.622994   57359 main.go:141] libmachine: Getting to WaitForSSH function...
	I0722 11:44:07.623002   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:44:07.625081   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:07.625446   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:07.625471   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:07.625592   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:44:07.625747   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:07.625897   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:07.625993   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:44:07.626129   57359 main.go:141] libmachine: Using SSH client type: native
	I0722 11:44:07.626301   57359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:44:07.626312   57359 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0722 11:44:07.731368   57359 main.go:141] libmachine: SSH cmd err, output: <nil>: 
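
The follow-up commands use the "native" client, i.e. an in-process SSH connection instead of shelling out. A bare-bones equivalent using golang.org/x/crypto/ssh with the same user, key, and address; host-key verification is disabled here only to mirror the StrictHostKeyChecking=no behaviour above, and the snippet is a sketch rather than the provisioner's actual code:

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyPath := "/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa"
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
        }
        client, err := ssh.Dial("tcp", "192.168.72.113:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        // The command the provisioner issues next in the log to detect the OS.
        out, err := session.CombinedOutput("cat /etc/os-release")
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("%s", out)
    }
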
	I0722 11:44:07.731390   57359 main.go:141] libmachine: Detecting the provisioner...
	I0722 11:44:07.731397   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:44:07.734078   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:07.734444   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:07.734469   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:07.734658   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:44:07.734851   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:07.734985   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:07.735099   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:44:07.735250   57359 main.go:141] libmachine: Using SSH client type: native
	I0722 11:44:07.735468   57359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:44:07.735489   57359 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0722 11:44:07.832944   57359 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0722 11:44:07.833025   57359 main.go:141] libmachine: found compatible host: buildroot
	I0722 11:44:07.833039   57359 main.go:141] libmachine: Provisioning with buildroot...
	I0722 11:44:07.833054   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:44:07.833326   57359 buildroot.go:166] provisioning hostname "embed-certs-802149"
	I0722 11:44:07.833356   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:44:07.833556   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:44:07.836278   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:07.836612   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:07.836648   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:07.836778   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:44:07.836937   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:07.837058   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:07.837198   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:44:07.837335   57359 main.go:141] libmachine: Using SSH client type: native
	I0722 11:44:07.837498   57359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:44:07.837509   57359 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-802149 && echo "embed-certs-802149" | sudo tee /etc/hostname
	I0722 11:44:07.950769   57359 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-802149
	
	I0722 11:44:07.950794   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:44:07.953463   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:07.953795   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:07.953816   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:07.954010   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:44:07.954167   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:07.954347   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:07.954485   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:44:07.954638   57359 main.go:141] libmachine: Using SSH client type: native
	I0722 11:44:07.954834   57359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:44:07.954852   57359 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-802149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-802149/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-802149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:44:08.060714   57359 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:44:08.060748   57359 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:44:08.060765   57359 buildroot.go:174] setting up certificates
	I0722 11:44:08.060773   57359 provision.go:84] configureAuth start
	I0722 11:44:08.060782   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:44:08.061098   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:44:08.063615   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.064006   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:08.064030   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.064140   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:44:08.066256   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.066570   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:08.066607   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.066759   57359 provision.go:143] copyHostCerts
	I0722 11:44:08.066833   57359 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:44:08.066850   57359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:44:08.066933   57359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:44:08.067039   57359 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:44:08.067048   57359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:44:08.067084   57359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:44:08.067152   57359 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:44:08.067167   57359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:44:08.067203   57359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:44:08.067267   57359 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.embed-certs-802149 san=[127.0.0.1 192.168.72.113 embed-certs-802149 localhost minikube]
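
The server certificate mentioned here is an ordinary CA-signed x509 certificate whose SANs are the ones listed (127.0.0.1, 192.168.72.113, embed-certs-802149, localhost, minikube). A compact illustration with Go's crypto/x509, generating a throwaway CA instead of loading ca.pem/ca-key.pem; key sizes, serial numbers, and lifetimes are assumptions for the example, and error handling is elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for ca.pem / ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs the log lists for embed-certs-802149.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-802149"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            DNSNames:     []string{"embed-certs-802149", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.113")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
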
	I0722 11:44:08.227941   57359 provision.go:177] copyRemoteCerts
	I0722 11:44:08.228001   57359 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:44:08.228028   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:44:08.230564   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.230968   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:08.230995   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.231126   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:44:08.231326   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:08.231486   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:44:08.231649   57359 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:44:08.312029   57359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:44:08.335007   57359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0722 11:44:08.357390   57359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 11:44:08.379356   57359 provision.go:87] duration metric: took 318.571804ms to configureAuth
	I0722 11:44:08.379375   57359 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:44:08.379531   57359 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:44:08.379593   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:44:08.382216   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.382537   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:08.382565   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.382765   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:44:08.382970   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:08.383115   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:08.383251   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:44:08.383369   57359 main.go:141] libmachine: Using SSH client type: native
	I0722 11:44:08.383516   57359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:44:08.383533   57359 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:44:08.630088   57359 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:44:08.630112   57359 main.go:141] libmachine: Checking connection to Docker...
	I0722 11:44:08.630121   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetURL
	I0722 11:44:08.631368   57359 main.go:141] libmachine: (embed-certs-802149) DBG | Using libvirt version 6000000
	I0722 11:44:08.633490   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.633818   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:08.633847   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.633972   57359 main.go:141] libmachine: Docker is up and running!
	I0722 11:44:08.633996   57359 main.go:141] libmachine: Reticulating splines...
	I0722 11:44:08.634004   57359 client.go:171] duration metric: took 25.163271983s to LocalClient.Create
	I0722 11:44:08.634044   57359 start.go:167] duration metric: took 25.163333253s to libmachine.API.Create "embed-certs-802149"
	I0722 11:44:08.634053   57359 start.go:293] postStartSetup for "embed-certs-802149" (driver="kvm2")
	I0722 11:44:08.634062   57359 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:44:08.634079   57359 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:44:08.634329   57359 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:44:08.634350   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:44:08.636349   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.636652   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:08.636683   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.636828   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:44:08.637004   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:08.637152   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:44:08.637274   57359 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:44:08.714552   57359 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:44:08.718490   57359 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:44:08.718515   57359 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:44:08.718577   57359 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:44:08.718647   57359 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:44:08.718737   57359 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:44:08.727979   57359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:44:08.751169   57359 start.go:296] duration metric: took 117.10383ms for postStartSetup
	I0722 11:44:08.751216   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetConfigRaw
	I0722 11:44:08.751770   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:44:08.754376   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.754725   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:08.754747   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.754961   57359 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/config.json ...
	I0722 11:44:08.755109   57359 start.go:128] duration metric: took 25.302842893s to createHost
	I0722 11:44:08.755125   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:44:08.757175   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.757471   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:08.757490   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.757651   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:44:08.757832   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:08.757990   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:08.758401   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:44:08.759585   57359 main.go:141] libmachine: Using SSH client type: native
	I0722 11:44:08.759737   57359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:44:08.759748   57359 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:44:08.856626   57359 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721648648.834981100
	
	I0722 11:44:08.856654   57359 fix.go:216] guest clock: 1721648648.834981100
	I0722 11:44:08.856664   57359 fix.go:229] Guest: 2024-07-22 11:44:08.8349811 +0000 UTC Remote: 2024-07-22 11:44:08.75511765 +0000 UTC m=+25.406069682 (delta=79.86345ms)
	I0722 11:44:08.856700   57359 fix.go:200] guest clock delta is within tolerance: 79.86345ms
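
The clock check above subtracts the host's reading of its own clock from the timestamp the guest reports and resyncs only if the absolute difference exceeds a tolerance. A small worked example with the two timestamps from the log; the one-second tolerance is an assumed value for illustration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest clock as reported over SSH, and the host's own reading, from the log.
        guest := time.Unix(1721648648, 834981100)
        host := time.Date(2024, time.July, 22, 11, 44, 8, 755117650, time.UTC)

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed threshold for this example
        if delta <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock skewed by %v, would resync\n", delta)
        }
    }
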
	I0722 11:44:08.856710   57359 start.go:83] releasing machines lock for "embed-certs-802149", held for 25.404550772s
	I0722 11:44:08.856734   57359 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:44:08.856965   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:44:08.859154   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.859482   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:08.859509   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.859637   57359 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:44:08.860111   57359 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:44:08.860271   57359 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:44:08.860349   57359 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:44:08.860414   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:44:08.860464   57359 ssh_runner.go:195] Run: cat /version.json
	I0722 11:44:08.860487   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:44:08.862927   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.863237   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.863274   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:08.863293   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.863395   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:44:08.863554   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:08.863624   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:08.863651   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:08.863707   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:44:08.863785   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:44:08.863848   57359 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:44:08.863925   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:08.864036   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:44:08.864173   57359 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:44:08.957078   57359 ssh_runner.go:195] Run: systemctl --version
	I0722 11:44:08.962727   57359 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:44:09.116594   57359 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:44:09.122404   57359 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:44:09.122453   57359 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:44:09.137633   57359 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
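(Editor's note: the step above moves any bridge/podman CNI config aside by appending ".mk_disabled", so only the bridge config minikube manages remains active. A minimal standalone sketch of that rename-with-suffix idea, using only the paths and suffix visible in the log; this is illustrative, not minikube's actual implementation.)

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        // Mirror the `find /etc/cni/net.d ... -name *bridge* -or -name *podman*` step:
        // move matching configs aside by appending ".mk_disabled".
        for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pattern)
            if err != nil {
                log.Fatal(err)
            }
            for _, path := range matches {
                if filepath.Ext(path) == ".mk_disabled" {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(path, path+".mk_disabled"); err != nil {
                    log.Fatal(err)
                }
                fmt.Printf("disabled %s\n", path)
            }
        }
    }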
	I0722 11:44:09.137652   57359 start.go:495] detecting cgroup driver to use...
	I0722 11:44:09.137699   57359 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:44:09.156113   57359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:44:09.168972   57359 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:44:09.169025   57359 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:44:09.181696   57359 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:44:09.194513   57359 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:44:09.302631   57359 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:44:09.462222   57359 docker.go:233] disabling docker service ...
	I0722 11:44:09.462296   57359 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:44:09.476494   57359 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:44:09.488940   57359 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:44:09.605621   57359 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:44:09.732922   57359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:44:09.747036   57359 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:44:09.764503   57359 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 11:44:09.764563   57359 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:44:09.774623   57359 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:44:09.774672   57359 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:44:09.784536   57359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:44:09.794713   57359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:44:09.804639   57359 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:44:09.814655   57359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:44:09.824355   57359 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:44:09.840779   57359 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
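(Editor's note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin pause_image to registry.k8s.io/pause:3.9, set cgroup_manager to "cgroupfs", force conmon_cgroup to "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A rough standalone equivalent of the first two substitutions, assuming the stock key = "value" layout of that drop-in file; illustrative only.)

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            log.Fatal(err)
        }
    }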
	I0722 11:44:09.850659   57359 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:44:09.859704   57359 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:44:09.859747   57359 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:44:09.872589   57359 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
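(Editor's note: the sysctl probe above exits 255 only because the br_netfilter module is not loaded yet, which is why the log calls it "might be okay"; the run then falls back to modprobe and enables IPv4 forwarding directly through /proc. A hedged sketch of that check, then load, then enable ordering, built from the commands shown above; it needs root, like the originals.)

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Probe first; a missing /proc/sys/net/bridge/... path just means br_netfilter is not loaded.
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            log.Printf("netfilter not verifiable yet (%v), loading br_netfilter", err)
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                log.Fatal(err)
            }
        }
        // Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            log.Fatal(err)
        }
    }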
	I0722 11:44:09.881459   57359 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:44:09.995509   57359 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:44:10.129631   57359 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:44:10.129708   57359 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:44:10.134119   57359 start.go:563] Will wait 60s for crictl version
	I0722 11:44:10.134169   57359 ssh_runner.go:195] Run: which crictl
	I0722 11:44:10.137984   57359 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:44:10.177140   57359 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:44:10.177222   57359 ssh_runner.go:195] Run: crio --version
	I0722 11:44:10.203483   57359 ssh_runner.go:195] Run: crio --version
	I0722 11:44:10.231618   57359 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 11:44:10.232811   57359 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:44:10.235781   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:10.236173   57359 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:10.236198   57359 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:10.236428   57359 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 11:44:10.240177   57359 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:44:10.252437   57359 kubeadm.go:883] updating cluster {Name:embed-certs-802149 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:44:10.252546   57359 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:44:10.252595   57359 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:44:10.287992   57359 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 11:44:10.288072   57359 ssh_runner.go:195] Run: which lz4
	I0722 11:44:10.291881   57359 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 11:44:10.295936   57359 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:44:10.295958   57359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 11:44:11.654916   57359 crio.go:462] duration metric: took 1.363078795s to copy over tarball
	I0722 11:44:11.654976   57359 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:44:13.831242   57359 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.176229756s)
	I0722 11:44:13.831273   57359 crio.go:469] duration metric: took 2.176333222s to extract the tarball
	I0722 11:44:13.831282   57359 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:44:13.869405   57359 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:44:13.918978   57359 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:44:13.918999   57359 cache_images.go:84] Images are preloaded, skipping loading
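(Editor's note: the span from 11:44:10.288 to 11:44:13.919 is the preload path: when `crictl images` does not show the expected kube images, the cached tarball is copied to /preloaded.tar.lz4, unpacked into /var with security xattrs preserved, removed, and the image list is checked again. A simplified local sketch of that check, extract, recheck ordering; `preloaded` is a hypothetical helper, not the real loader.)

    package main

    import (
        "bytes"
        "log"
        "os/exec"
    )

    // preloaded reports whether a marker image already shows up in `crictl images`.
    func preloaded(ref string) bool {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        return err == nil && bytes.Contains(out, []byte(ref))
    }

    func main() {
        const ref = "registry.k8s.io/kube-apiserver" // marker image used in the log above
        if preloaded(ref) {
            log.Println("all images are preloaded, skipping extraction")
            return
        }
        // Same flags as the log: keep security xattrs, decompress with lz4, unpack under /var.
        tar := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if err := tar.Run(); err != nil {
            log.Fatal(err)
        }
        log.Println("preload extracted:", preloaded(ref))
    }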
	I0722 11:44:13.919006   57359 kubeadm.go:934] updating node { 192.168.72.113 8443 v1.30.3 crio true true} ...
	I0722 11:44:13.919110   57359 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-802149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:44:13.919177   57359 ssh_runner.go:195] Run: crio config
	I0722 11:44:13.970929   57359 cni.go:84] Creating CNI manager for ""
	I0722 11:44:13.970958   57359 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:44:13.970970   57359 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:44:13.970994   57359 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.113 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-802149 NodeName:embed-certs-802149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:44:13.971186   57359 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.113
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-802149"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.113
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.113"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:44:13.971259   57359 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 11:44:13.981712   57359 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:44:13.981775   57359 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:44:13.991565   57359 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0722 11:44:14.009671   57359 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:44:14.027009   57359 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0722 11:44:14.044712   57359 ssh_runner.go:195] Run: grep 192.168.72.113	control-plane.minikube.internal$ /etc/hosts
	I0722 11:44:14.048652   57359 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.113	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
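(Editor's note: the /etc/hosts edit above, like the host.minikube.internal one at 11:44:10.240, is idempotent: it greps for the entry first and, only if missing, rewrites the file by filtering out any stale line for that hostname and appending the fresh mapping. A small sketch of the same filter-and-append pattern, assuming direct file access rather than the ssh_runner.)

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // ensureHost rewrites hostsPath so that exactly one line maps name to ip.
    func ensureHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale mapping for this name, like the grep -v above
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHost("/etc/hosts", "192.168.72.113", "control-plane.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }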
	I0722 11:44:14.060585   57359 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:44:14.180678   57359 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:44:14.201804   57359 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149 for IP: 192.168.72.113
	I0722 11:44:14.201830   57359 certs.go:194] generating shared ca certs ...
	I0722 11:44:14.201844   57359 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:44:14.202015   57359 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:44:14.202056   57359 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:44:14.202064   57359 certs.go:256] generating profile certs ...
	I0722 11:44:14.202132   57359 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/client.key
	I0722 11:44:14.202149   57359 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/client.crt with IP's: []
	I0722 11:44:14.367082   57359 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/client.crt ...
	I0722 11:44:14.367108   57359 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/client.crt: {Name:mkf90eecef231baac8f1b3de7abe2300e9f3a83e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:44:14.367289   57359 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/client.key ...
	I0722 11:44:14.367304   57359 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/client.key: {Name:mk704685a33da741ca901d74a169ea3134f35ade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:44:14.367432   57359 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key.447fbea1
	I0722 11:44:14.367457   57359 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.crt.447fbea1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.113]
	I0722 11:44:14.593079   57359 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.crt.447fbea1 ...
	I0722 11:44:14.593108   57359 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.crt.447fbea1: {Name:mk24bfb4fcfd720586f44caf7e2d088ce67f3e05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:44:14.626327   57359 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key.447fbea1 ...
	I0722 11:44:14.626362   57359 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key.447fbea1: {Name:mk620aa08ead7770f5d59dffdfbca90bbd51bcaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:44:14.626486   57359 certs.go:381] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.crt.447fbea1 -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.crt
	I0722 11:44:14.626579   57359 certs.go:385] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key.447fbea1 -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key
	I0722 11:44:14.626658   57359 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.key
	I0722 11:44:14.626682   57359 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.crt with IP's: []
	I0722 11:44:14.746402   57359 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.crt ...
	I0722 11:44:14.746429   57359 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.crt: {Name:mk9e8ef6801b7427b527ddad8b97f499a7a16629 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:44:14.746589   57359 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.key ...
	I0722 11:44:14.746600   57359 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.key: {Name:mk1b5ecdf41171e69c491485662d12d2e8b25416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:44:14.746756   57359 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:44:14.746794   57359 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:44:14.746804   57359 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:44:14.746825   57359 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:44:14.746845   57359 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:44:14.746868   57359 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:44:14.746909   57359 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:44:14.747470   57359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:44:14.777002   57359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:44:14.799945   57359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:44:14.822853   57359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:44:14.849963   57359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0722 11:44:14.876843   57359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 11:44:14.903054   57359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:44:14.928981   57359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 11:44:14.955198   57359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:44:14.979125   57359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:44:15.005448   57359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:44:15.036334   57359 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:44:15.056552   57359 ssh_runner.go:195] Run: openssl version
	I0722 11:44:15.064788   57359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:44:15.078037   57359 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:44:15.083219   57359 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:44:15.083271   57359 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:44:15.088890   57359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:44:15.099417   57359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:44:15.109739   57359 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:44:15.113851   57359 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:44:15.113886   57359 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:44:15.119192   57359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:44:15.129358   57359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:44:15.139802   57359 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:44:15.143996   57359 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:44:15.144053   57359 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:44:15.149371   57359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
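(Editor's note: the block from 11:44:15.064 onward installs each CA bundle into /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject-hash name, which is how OpenSSL-based clients locate trusted CAs: b5213941.0 for minikubeCA, 3ec20f2e.0 and 51391683.0 for the test certs. A sketch of that hash-then-symlink step which shells out to the same `openssl x509 -hash` call seen above; the paths follow the log, the helper name is ours.)

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    // installCA links an already-copied PEM file into /etc/ssl/certs under its subject hash.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }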
	I0722 11:44:15.159809   57359 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:44:15.163771   57359 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 11:44:15.163821   57359 kubeadm.go:392] StartCluster: {Name:embed-certs-802149 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:44:15.163898   57359 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:44:15.163955   57359 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:44:15.206164   57359 cri.go:89] found id: ""
	I0722 11:44:15.206224   57359 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:44:15.216279   57359 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:44:15.226670   57359 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:44:15.238313   57359 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:44:15.238331   57359 kubeadm.go:157] found existing configuration files:
	
	I0722 11:44:15.238384   57359 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:44:15.247230   57359 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:44:15.247285   57359 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:44:15.256336   57359 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:44:15.266905   57359 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:44:15.266954   57359 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:44:15.277238   57359 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:44:15.286514   57359 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:44:15.286561   57359 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:44:15.295579   57359 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:44:15.306015   57359 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:44:15.306061   57359 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
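(Editor's note: the grep/rm pairs above are the stale-config cleanup before `kubeadm init`: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm writes a fresh one. Here none of the files exist yet, so the removes are no-ops. A compact sketch of that keep-or-remove rule, not the actual minikube code.)

    package main

    import (
        "bytes"
        "log"
        "os"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            if err == nil && bytes.Contains(data, []byte(endpoint)) {
                continue // config already targets the expected control plane, keep it
            }
            if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
                log.Fatal(err)
            }
            log.Printf("removed stale or missing %s", conf)
        }
    }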
	I0722 11:44:15.315128   57359 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:44:15.428778   57359 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 11:44:15.428856   57359 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:44:15.554939   57359 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:44:15.555107   57359 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:44:15.555277   57359 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:44:15.759799   57359 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:44:17.174572   56872 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m32.095831553s)
	I0722 11:44:17.174604   56872 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:44:17.174655   56872 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:44:17.180228   56872 start.go:563] Will wait 60s for crictl version
	I0722 11:44:17.180287   56872 ssh_runner.go:195] Run: which crictl
	I0722 11:44:17.184260   56872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:44:17.229243   56872 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:44:17.229331   56872 ssh_runner.go:195] Run: crio --version
	I0722 11:44:17.259344   56872 ssh_runner.go:195] Run: crio --version
	I0722 11:44:17.299022   56872 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0722 11:44:15.792891   57359 out.go:204]   - Generating certificates and keys ...
	I0722 11:44:15.793036   57359 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:44:15.793137   57359 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:44:15.946690   57359 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0722 11:44:16.094985   57359 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0722 11:44:16.212877   57359 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0722 11:44:16.477234   57359 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0722 11:44:16.594030   57359 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0722 11:44:16.594308   57359 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-802149 localhost] and IPs [192.168.72.113 127.0.0.1 ::1]
	I0722 11:44:16.722280   57359 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0722 11:44:16.722613   57359 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-802149 localhost] and IPs [192.168.72.113 127.0.0.1 ::1]
	I0722 11:44:16.884740   57359 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0722 11:44:17.062817   57359 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0722 11:44:17.198598   57359 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0722 11:44:17.198863   57359 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:44:17.343517   57359 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:44:17.484771   57359 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:44:17.810789   57359 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:44:17.886555   57359 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:44:18.003858   57359 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:44:18.004675   57359 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:44:18.007098   57359 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:44:18.008827   57359 out.go:204]   - Booting up control plane ...
	I0722 11:44:18.008949   57359 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:44:18.009064   57359 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:44:18.009455   57359 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:44:18.031435   57359 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:44:18.031577   57359 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:44:18.031634   57359 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:44:18.177640   57359 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:44:18.177764   57359 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:44:18.153777   55745 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:44:18.153883   55745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:44:18.154173   55745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
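(Editor's note: the kubelet-check lines just above come from a different, interleaved profile (pid 55745) whose kubelet has not answered yet; kubeadm keeps retrying the same probe, a plain GET against http://localhost:10248/healthz, until the 4m0s budget seen earlier runs out. A minimal version of that probe loop; the retry interval here is an assumption, not kubeadm's exact schedule.)

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
        for time.Now().Before(deadline) {
            resp, err := http.Get("http://localhost:10248/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("kubelet is healthy")
                    return
                }
            }
            time.Sleep(5 * time.Second) // retry interval is an assumption
        }
        fmt.Println("kubelet did not become healthy within 4m0s")
    }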
	I0722 11:44:17.300271   56872 main.go:141] libmachine: (kubernetes-upgrade-651148) Calling .GetIP
	I0722 11:44:17.302921   56872 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:44:17.303288   56872 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7d:ff", ip: ""} in network mk-kubernetes-upgrade-651148: {Iface:virbr4 ExpiryTime:2024-07-22 12:42:12 +0000 UTC Type:0 Mac:52:54:00:61:7d:ff Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:kubernetes-upgrade-651148 Clientid:01:52:54:00:61:7d:ff}
	I0722 11:44:17.303313   56872 main.go:141] libmachine: (kubernetes-upgrade-651148) DBG | domain kubernetes-upgrade-651148 has defined IP address 192.168.39.123 and MAC address 52:54:00:61:7d:ff in network mk-kubernetes-upgrade-651148
	I0722 11:44:17.303516   56872 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 11:44:17.309320   56872 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-651148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-651148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:44:17.309410   56872 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 11:44:17.309463   56872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:44:17.360432   56872 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:44:17.360457   56872 crio.go:433] Images already preloaded, skipping extraction
	I0722 11:44:17.360507   56872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:44:17.394457   56872 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:44:17.394482   56872 cache_images.go:84] Images are preloaded, skipping loading
	I0722 11:44:17.394492   56872 kubeadm.go:934] updating node { 192.168.39.123 8443 v1.31.0-beta.0 crio true true} ...
	I0722 11:44:17.394626   56872 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-651148 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-651148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:44:17.394717   56872 ssh_runner.go:195] Run: crio config
	I0722 11:44:17.443874   56872 cni.go:84] Creating CNI manager for ""
	I0722 11:44:17.443894   56872 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:44:17.443903   56872 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:44:17.443924   56872 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-651148 NodeName:kubernetes-upgrade-651148 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/ce
rts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:44:17.444089   56872 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-651148"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:44:17.444160   56872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 11:44:17.453942   56872 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:44:17.453996   56872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:44:17.462788   56872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0722 11:44:17.481021   56872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 11:44:17.496944   56872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I0722 11:44:17.513380   56872 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I0722 11:44:17.517128   56872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:44:17.655441   56872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:44:17.671268   56872 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148 for IP: 192.168.39.123
	I0722 11:44:17.671294   56872 certs.go:194] generating shared ca certs ...
	I0722 11:44:17.671312   56872 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:44:17.671477   56872 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:44:17.671531   56872 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:44:17.671543   56872 certs.go:256] generating profile certs ...
	I0722 11:44:17.671623   56872 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/client.key
	I0722 11:44:17.671665   56872 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/apiserver.key.983df103
	I0722 11:44:17.671700   56872 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/proxy-client.key
	I0722 11:44:17.671793   56872 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:44:17.671822   56872 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:44:17.671831   56872 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:44:17.671852   56872 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:44:17.671872   56872 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:44:17.671893   56872 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:44:17.671933   56872 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:44:17.672956   56872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:44:17.696972   56872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:44:17.722268   56872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:44:17.746103   56872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:44:17.770952   56872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0722 11:44:17.793284   56872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 11:44:17.817072   56872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:44:17.839886   56872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/kubernetes-upgrade-651148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:44:17.863239   56872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:44:17.885803   56872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:44:17.908870   56872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:44:17.937049   56872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:44:17.952737   56872 ssh_runner.go:195] Run: openssl version
	I0722 11:44:17.958694   56872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:44:17.968926   56872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:44:17.973158   56872 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:44:17.973206   56872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:44:17.978471   56872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:44:17.987134   56872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:44:18.019278   56872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:44:18.026422   56872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:44:18.026489   56872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:44:18.038934   56872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:44:18.094226   56872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:44:18.130021   56872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:44:18.162204   56872 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:44:18.162280   56872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:44:18.190263   56872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:44:18.237969   56872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:44:18.252995   56872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:44:18.270510   56872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:44:18.280706   56872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:44:18.302306   56872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:44:18.316723   56872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:44:18.351518   56872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
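Each of the "-checkend 86400" calls above succeeds only if the certificate is still valid 24 hours from now. The same check expressed directly in Go is sketched below; this is an assumption-level illustration of the technique, not the code behind the log.

	// Sketch: report whether a PEM-encoded certificate expires within d,
	// the Go equivalent of "openssl x509 -checkend <seconds>".
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Expired (or expiring) if "now + d" is past NotAfter.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}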
	I0722 11:44:18.361195   56872 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-651148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-651148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:44:18.361302   56872 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:44:18.361366   56872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:44:18.443482   56872 cri.go:89] found id: "3f87773073f4b1a43c5d075611019881d08f08a603d25c7f99112d1f3954ffc1"
	I0722 11:44:18.443508   56872 cri.go:89] found id: "67ae3de111a87fbc9e416dd1eb599cab22ab4edd059ac64a6da8d853eb6cb0bb"
	I0722 11:44:18.443514   56872 cri.go:89] found id: "ef36ea1e134217a61d4669f1e73841080e8f36a116f48cdbe551e4fc8f6c9058"
	I0722 11:44:18.443518   56872 cri.go:89] found id: "6c610ae6d0af6227cf86a8be7ae943d6b1e1ce547d290888bc875c9e85513030"
	I0722 11:44:18.443522   56872 cri.go:89] found id: "a762e1f13695582be6eae74b21e0afa3371a316ea769fe250880e08a260d17f6"
	I0722 11:44:18.443526   56872 cri.go:89] found id: "48cfffce35f7fada7ccfdd7a93ec3f8084841dcca6fb3827d8af15d0871942b2"
	I0722 11:44:18.443530   56872 cri.go:89] found id: "71442ff782c927ac6f8502a13d1033ac699ccbf9c80557eefe20c2b7c0fa6dcf"
	I0722 11:44:18.443534   56872 cri.go:89] found id: "a6d84a4a683b566944cc0e587da5beba6e88c3eb001fd55854b2d9d9ef7d54c7"
	I0722 11:44:18.443538   56872 cri.go:89] found id: "c8a8487889c85e6d01da8ddfc7a4e95fd3c37e477bc7264185d554eae9848307"
	I0722 11:44:18.443547   56872 cri.go:89] found id: "f021f4a593b2f7df11f4ac994bbe2e47bee6077317557397f102d6fef6b8131f"
	I0722 11:44:18.443551   56872 cri.go:89] found id: "eec07af9e5d5047e1f186b47d7f497e0ef4a2bb563a87391cc701ef4d6b679c9"
	I0722 11:44:18.443556   56872 cri.go:89] found id: ""
	I0722 11:44:18.443609   56872 ssh_runner.go:195] Run: sudo runc list -f json
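The "found id" lines above come from the crictl invocation at 11:44:18.361366, which lists kube-system containers by label and prints one ID per line. A small Go sketch of that step is shown below; the helper listContainerIDs is hypothetical and only mirrors the command visible in the log.

	// Sketch: list CRI container IDs for a namespace by shelling out to
	// "crictl ps -a --quiet --label io.kubernetes.pod.namespace=<ns>".
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listContainerIDs(namespace string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace="+namespace).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if line = strings.TrimSpace(line); line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listContainerIDs("kube-system")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}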
	
	
	==> CRI-O <==
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.353519110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721648692353496479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4586daa8-c6da-4151-a251-c8c11df024b7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.354228464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f143b09-6718-4e3a-9270-f2a322cbaf13 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.354335105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f143b09-6718-4e3a-9270-f2a322cbaf13 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.354844129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b847addf058fa3f70b64546f533794637362fa6b86fbf75a6d9b7769a44731f,PodSandboxId:ae7a94f6a58fdb3ad836ff6c5026b5fd58f0dca5153ab1756997695ad3d4d395,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721648690136424797,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-qw2r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae99364b-d72c-4efb-a8cc-378e63c276ed,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad750ea9572051d43d27055142330b43ee1843da75b9b0e74d44811a6ddda39,PodSandboxId:2bdda16abcb202ef9354da9059105a7b2d496767891bca42d853d326967bce2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721648690064958770,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-v4djk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 350c6f5f-fccb-4d0e-951d-5774c992dcf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a6bf3b653d9dfc70530bf0ccc276ad08225108a9c953356afe9ebcaf90405ba,PodSandboxId:34bbdbe2892e154e80281792f4013e6937e741fe3f4077ed96f39b4ed048f688,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721648689723194539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6c9b27a-bfb2-4bf0-a7a3-6d8a8c858b02,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0339f1f8453e8a2992455b590a12351f16bf70d8d83bf8f25e9cea47527cab4c,PodSandboxId:b4c9391594a01d16cf2707f541684cce3a26708f09f41ce13733451b5e58dbda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNI
NG,CreatedAt:1721648688116663518,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 592f8ad75d019b3cda4224b5a9dfe5f6,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3fef5319d8e61250849ce38fe0da6dab4831e04eadde692fb10ad0b6a58ac77,PodSandboxId:ae0688403dbc7b18a7035998dfba5881f7bd3aeb228e815c8dc617c6545ddaba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,Cr
eatedAt:1721648684552979757,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8d93fe2975407c501fa7e711753d84,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd8e82e4c0eee7f90de18bf2d485ea291c26cea00e617028aa82a232788b779,PodSandboxId:e3fcdea10ae964cb12cc794cdb164107f18a354c0e1a45696383bed055382eb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNIN
G,CreatedAt:1721648684542082323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a98807ddb1b42839182476d83f8eddb,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5406f4b619f7fc20c1817bbe0d853afb4e503f84a76067c13f9ad9e95bdbbc5,PodSandboxId:5d1ffd196124284da3e86baf6c44722424eda15f712729ea01aec83d27554894,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUN
NING,CreatedAt:1721648682716060861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4dcaa06c1ee78c7865bda9445c4313d,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26246094ac175a1803cde74c890546a517cc766e4450de9f0da6527128219af7,PodSandboxId:e3fcdea10ae964cb12cc794cdb164107f18a354c0e1a45696383bed055382eb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:172
1648662052300269,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a98807ddb1b42839182476d83f8eddb,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83d75f28d3b3a34d41283c74e6731d331f92846fef2641e3f498d915a6c9fc89,PodSandboxId:3b6bc5cecee4f5f73b0e0326b3cebc4edd43b40b990a6dedbbcb82ec23a719fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,Crea
tedAt:1721648661898241627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lx4qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecadbc0a-bdbf-4011-8316-8eb84808b555,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3656d9b5811390b40bef7ec3b13e4d94c98eb6e68fae71d7e7853e50dcf4aa3,PodSandboxId:b4c9391594a01d16cf2707f541684cce3a26708f09f41ce13733451b5e58dbda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721648658342375915,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 592f8ad75d019b3cda4224b5a9dfe5f6,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e0f7815ea5a8bf7e85cdba257f47784b24fd067edfd240fa4c46ee691c8596e,PodSandboxId:5d1ffd196124284da3e86baf6c44722424eda15f712729ea01aec83d27554894,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721648658381403366,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4dcaa06c1ee78c7865bda9445c4313d,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f87773073f4b1a43c5d075611019881d08f08a603d25c7f99112d1f3954ffc1,PodSandboxId:ae0688403dbc7b18a7035998dfba5881f7bd3aeb228e815c8dc617c6545ddaba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721648658325493054,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8d93fe2975407c501fa7e711753d84,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67ae3de111a87fbc9e416dd1eb599cab22ab4edd059ac64a6da8d853eb6cb0bb,PodSandboxId:98c7130b12621d0cfe4c5031860a4945b6136e6786aeb542b7a145c264ef3c31,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721648564457314540,Labels:map[string]string{io.kubernetes.container.name: ku
be-proxy,io.kubernetes.pod.name: kube-proxy-lx4qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecadbc0a-bdbf-4011-8316-8eb84808b555,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f143b09-6718-4e3a-9270-f2a322cbaf13 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.406922785Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a741d1d-6bf3-46be-8a0b-0593528e6b6c name=/runtime.v1.RuntimeService/Version
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.407063887Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a741d1d-6bf3-46be-8a0b-0593528e6b6c name=/runtime.v1.RuntimeService/Version
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.412825219Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8465993-6b3a-4c6c-b7f5-2fdbd84c6d32 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.413183875Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721648692413161156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8465993-6b3a-4c6c-b7f5-2fdbd84c6d32 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.413650986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2a2772b-45ef-4608-8628-ddac4bd56179 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.413802830Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2a2772b-45ef-4608-8628-ddac4bd56179 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.414245431Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b847addf058fa3f70b64546f533794637362fa6b86fbf75a6d9b7769a44731f,PodSandboxId:ae7a94f6a58fdb3ad836ff6c5026b5fd58f0dca5153ab1756997695ad3d4d395,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721648690136424797,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-qw2r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae99364b-d72c-4efb-a8cc-378e63c276ed,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad750ea9572051d43d27055142330b43ee1843da75b9b0e74d44811a6ddda39,PodSandboxId:2bdda16abcb202ef9354da9059105a7b2d496767891bca42d853d326967bce2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721648690064958770,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-v4djk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 350c6f5f-fccb-4d0e-951d-5774c992dcf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a6bf3b653d9dfc70530bf0ccc276ad08225108a9c953356afe9ebcaf90405ba,PodSandboxId:34bbdbe2892e154e80281792f4013e6937e741fe3f4077ed96f39b4ed048f688,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721648689723194539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6c9b27a-bfb2-4bf0-a7a3-6d8a8c858b02,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0339f1f8453e8a2992455b590a12351f16bf70d8d83bf8f25e9cea47527cab4c,PodSandboxId:b4c9391594a01d16cf2707f541684cce3a26708f09f41ce13733451b5e58dbda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNI
NG,CreatedAt:1721648688116663518,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 592f8ad75d019b3cda4224b5a9dfe5f6,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3fef5319d8e61250849ce38fe0da6dab4831e04eadde692fb10ad0b6a58ac77,PodSandboxId:ae0688403dbc7b18a7035998dfba5881f7bd3aeb228e815c8dc617c6545ddaba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,Cr
eatedAt:1721648684552979757,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8d93fe2975407c501fa7e711753d84,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd8e82e4c0eee7f90de18bf2d485ea291c26cea00e617028aa82a232788b779,PodSandboxId:e3fcdea10ae964cb12cc794cdb164107f18a354c0e1a45696383bed055382eb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNIN
G,CreatedAt:1721648684542082323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a98807ddb1b42839182476d83f8eddb,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5406f4b619f7fc20c1817bbe0d853afb4e503f84a76067c13f9ad9e95bdbbc5,PodSandboxId:5d1ffd196124284da3e86baf6c44722424eda15f712729ea01aec83d27554894,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUN
NING,CreatedAt:1721648682716060861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4dcaa06c1ee78c7865bda9445c4313d,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26246094ac175a1803cde74c890546a517cc766e4450de9f0da6527128219af7,PodSandboxId:e3fcdea10ae964cb12cc794cdb164107f18a354c0e1a45696383bed055382eb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:172
1648662052300269,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a98807ddb1b42839182476d83f8eddb,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83d75f28d3b3a34d41283c74e6731d331f92846fef2641e3f498d915a6c9fc89,PodSandboxId:3b6bc5cecee4f5f73b0e0326b3cebc4edd43b40b990a6dedbbcb82ec23a719fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,Crea
tedAt:1721648661898241627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lx4qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecadbc0a-bdbf-4011-8316-8eb84808b555,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3656d9b5811390b40bef7ec3b13e4d94c98eb6e68fae71d7e7853e50dcf4aa3,PodSandboxId:b4c9391594a01d16cf2707f541684cce3a26708f09f41ce13733451b5e58dbda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721648658342375915,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 592f8ad75d019b3cda4224b5a9dfe5f6,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e0f7815ea5a8bf7e85cdba257f47784b24fd067edfd240fa4c46ee691c8596e,PodSandboxId:5d1ffd196124284da3e86baf6c44722424eda15f712729ea01aec83d27554894,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721648658381403366,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4dcaa06c1ee78c7865bda9445c4313d,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f87773073f4b1a43c5d075611019881d08f08a603d25c7f99112d1f3954ffc1,PodSandboxId:ae0688403dbc7b18a7035998dfba5881f7bd3aeb228e815c8dc617c6545ddaba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721648658325493054,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8d93fe2975407c501fa7e711753d84,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67ae3de111a87fbc9e416dd1eb599cab22ab4edd059ac64a6da8d853eb6cb0bb,PodSandboxId:98c7130b12621d0cfe4c5031860a4945b6136e6786aeb542b7a145c264ef3c31,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721648564457314540,Labels:map[string]string{io.kubernetes.container.name: ku
be-proxy,io.kubernetes.pod.name: kube-proxy-lx4qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecadbc0a-bdbf-4011-8316-8eb84808b555,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2a2772b-45ef-4608-8628-ddac4bd56179 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.459042342Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb12dfaa-3b09-43e5-887b-564d2a92d886 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.459125649Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb12dfaa-3b09-43e5-887b-564d2a92d886 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.460153342Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64a538c3-91e0-4f8f-8693-c4ef73b5092c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.460809983Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721648692460595361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64a538c3-91e0-4f8f-8693-c4ef73b5092c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.461369435Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26344243-d5f2-4ad1-a397-f26771df8df4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.461434886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26344243-d5f2-4ad1-a397-f26771df8df4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.461829101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b847addf058fa3f70b64546f533794637362fa6b86fbf75a6d9b7769a44731f,PodSandboxId:ae7a94f6a58fdb3ad836ff6c5026b5fd58f0dca5153ab1756997695ad3d4d395,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721648690136424797,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-qw2r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae99364b-d72c-4efb-a8cc-378e63c276ed,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad750ea9572051d43d27055142330b43ee1843da75b9b0e74d44811a6ddda39,PodSandboxId:2bdda16abcb202ef9354da9059105a7b2d496767891bca42d853d326967bce2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721648690064958770,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-v4djk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 350c6f5f-fccb-4d0e-951d-5774c992dcf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a6bf3b653d9dfc70530bf0ccc276ad08225108a9c953356afe9ebcaf90405ba,PodSandboxId:34bbdbe2892e154e80281792f4013e6937e741fe3f4077ed96f39b4ed048f688,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721648689723194539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6c9b27a-bfb2-4bf0-a7a3-6d8a8c858b02,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0339f1f8453e8a2992455b590a12351f16bf70d8d83bf8f25e9cea47527cab4c,PodSandboxId:b4c9391594a01d16cf2707f541684cce3a26708f09f41ce13733451b5e58dbda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNI
NG,CreatedAt:1721648688116663518,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 592f8ad75d019b3cda4224b5a9dfe5f6,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3fef5319d8e61250849ce38fe0da6dab4831e04eadde692fb10ad0b6a58ac77,PodSandboxId:ae0688403dbc7b18a7035998dfba5881f7bd3aeb228e815c8dc617c6545ddaba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,Cr
eatedAt:1721648684552979757,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8d93fe2975407c501fa7e711753d84,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd8e82e4c0eee7f90de18bf2d485ea291c26cea00e617028aa82a232788b779,PodSandboxId:e3fcdea10ae964cb12cc794cdb164107f18a354c0e1a45696383bed055382eb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNIN
G,CreatedAt:1721648684542082323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a98807ddb1b42839182476d83f8eddb,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5406f4b619f7fc20c1817bbe0d853afb4e503f84a76067c13f9ad9e95bdbbc5,PodSandboxId:5d1ffd196124284da3e86baf6c44722424eda15f712729ea01aec83d27554894,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUN
NING,CreatedAt:1721648682716060861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4dcaa06c1ee78c7865bda9445c4313d,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26246094ac175a1803cde74c890546a517cc766e4450de9f0da6527128219af7,PodSandboxId:e3fcdea10ae964cb12cc794cdb164107f18a354c0e1a45696383bed055382eb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:172
1648662052300269,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a98807ddb1b42839182476d83f8eddb,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83d75f28d3b3a34d41283c74e6731d331f92846fef2641e3f498d915a6c9fc89,PodSandboxId:3b6bc5cecee4f5f73b0e0326b3cebc4edd43b40b990a6dedbbcb82ec23a719fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,Crea
tedAt:1721648661898241627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lx4qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecadbc0a-bdbf-4011-8316-8eb84808b555,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3656d9b5811390b40bef7ec3b13e4d94c98eb6e68fae71d7e7853e50dcf4aa3,PodSandboxId:b4c9391594a01d16cf2707f541684cce3a26708f09f41ce13733451b5e58dbda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721648658342375915,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 592f8ad75d019b3cda4224b5a9dfe5f6,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e0f7815ea5a8bf7e85cdba257f47784b24fd067edfd240fa4c46ee691c8596e,PodSandboxId:5d1ffd196124284da3e86baf6c44722424eda15f712729ea01aec83d27554894,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721648658381403366,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4dcaa06c1ee78c7865bda9445c4313d,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f87773073f4b1a43c5d075611019881d08f08a603d25c7f99112d1f3954ffc1,PodSandboxId:ae0688403dbc7b18a7035998dfba5881f7bd3aeb228e815c8dc617c6545ddaba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721648658325493054,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8d93fe2975407c501fa7e711753d84,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67ae3de111a87fbc9e416dd1eb599cab22ab4edd059ac64a6da8d853eb6cb0bb,PodSandboxId:98c7130b12621d0cfe4c5031860a4945b6136e6786aeb542b7a145c264ef3c31,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721648564457314540,Labels:map[string]string{io.kubernetes.container.name: ku
be-proxy,io.kubernetes.pod.name: kube-proxy-lx4qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecadbc0a-bdbf-4011-8316-8eb84808b555,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26344243-d5f2-4ad1-a397-f26771df8df4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.497058005Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f745f801-92cd-449a-be0d-3776c9aeb9a3 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.497180601Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f745f801-92cd-449a-be0d-3776c9aeb9a3 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.498628892Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=238eeaac-7a80-4f1d-b5c3-e9cd7e93d5f4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.499188630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721648692499156380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=238eeaac-7a80-4f1d-b5c3-e9cd7e93d5f4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.500007602Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6b99a04-7005-49b1-870c-7abad5f11e5d name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.500081073Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6b99a04-7005-49b1-870c-7abad5f11e5d name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:44:52 kubernetes-upgrade-651148 crio[2133]: time="2024-07-22 11:44:52.500548411Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b847addf058fa3f70b64546f533794637362fa6b86fbf75a6d9b7769a44731f,PodSandboxId:ae7a94f6a58fdb3ad836ff6c5026b5fd58f0dca5153ab1756997695ad3d4d395,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721648690136424797,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-qw2r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae99364b-d72c-4efb-a8cc-378e63c276ed,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad750ea9572051d43d27055142330b43ee1843da75b9b0e74d44811a6ddda39,PodSandboxId:2bdda16abcb202ef9354da9059105a7b2d496767891bca42d853d326967bce2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721648690064958770,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-v4djk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 350c6f5f-fccb-4d0e-951d-5774c992dcf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a6bf3b653d9dfc70530bf0ccc276ad08225108a9c953356afe9ebcaf90405ba,PodSandboxId:34bbdbe2892e154e80281792f4013e6937e741fe3f4077ed96f39b4ed048f688,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721648689723194539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6c9b27a-bfb2-4bf0-a7a3-6d8a8c858b02,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0339f1f8453e8a2992455b590a12351f16bf70d8d83bf8f25e9cea47527cab4c,PodSandboxId:b4c9391594a01d16cf2707f541684cce3a26708f09f41ce13733451b5e58dbda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNI
NG,CreatedAt:1721648688116663518,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 592f8ad75d019b3cda4224b5a9dfe5f6,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3fef5319d8e61250849ce38fe0da6dab4831e04eadde692fb10ad0b6a58ac77,PodSandboxId:ae0688403dbc7b18a7035998dfba5881f7bd3aeb228e815c8dc617c6545ddaba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,Cr
eatedAt:1721648684552979757,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8d93fe2975407c501fa7e711753d84,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd8e82e4c0eee7f90de18bf2d485ea291c26cea00e617028aa82a232788b779,PodSandboxId:e3fcdea10ae964cb12cc794cdb164107f18a354c0e1a45696383bed055382eb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNIN
G,CreatedAt:1721648684542082323,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a98807ddb1b42839182476d83f8eddb,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5406f4b619f7fc20c1817bbe0d853afb4e503f84a76067c13f9ad9e95bdbbc5,PodSandboxId:5d1ffd196124284da3e86baf6c44722424eda15f712729ea01aec83d27554894,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUN
NING,CreatedAt:1721648682716060861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4dcaa06c1ee78c7865bda9445c4313d,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26246094ac175a1803cde74c890546a517cc766e4450de9f0da6527128219af7,PodSandboxId:e3fcdea10ae964cb12cc794cdb164107f18a354c0e1a45696383bed055382eb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:172
1648662052300269,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a98807ddb1b42839182476d83f8eddb,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83d75f28d3b3a34d41283c74e6731d331f92846fef2641e3f498d915a6c9fc89,PodSandboxId:3b6bc5cecee4f5f73b0e0326b3cebc4edd43b40b990a6dedbbcb82ec23a719fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,Crea
tedAt:1721648661898241627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lx4qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecadbc0a-bdbf-4011-8316-8eb84808b555,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3656d9b5811390b40bef7ec3b13e4d94c98eb6e68fae71d7e7853e50dcf4aa3,PodSandboxId:b4c9391594a01d16cf2707f541684cce3a26708f09f41ce13733451b5e58dbda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721648658342375915,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 592f8ad75d019b3cda4224b5a9dfe5f6,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e0f7815ea5a8bf7e85cdba257f47784b24fd067edfd240fa4c46ee691c8596e,PodSandboxId:5d1ffd196124284da3e86baf6c44722424eda15f712729ea01aec83d27554894,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721648658381403366,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4dcaa06c1ee78c7865bda9445c4313d,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f87773073f4b1a43c5d075611019881d08f08a603d25c7f99112d1f3954ffc1,PodSandboxId:ae0688403dbc7b18a7035998dfba5881f7bd3aeb228e815c8dc617c6545ddaba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721648658325493054,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-651148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8d93fe2975407c501fa7e711753d84,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67ae3de111a87fbc9e416dd1eb599cab22ab4edd059ac64a6da8d853eb6cb0bb,PodSandboxId:98c7130b12621d0cfe4c5031860a4945b6136e6786aeb542b7a145c264ef3c31,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721648564457314540,Labels:map[string]string{io.kubernetes.container.name: ku
be-proxy,io.kubernetes.pod.name: kube-proxy-lx4qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecadbc0a-bdbf-4011-8316-8eb84808b555,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6b99a04-7005-49b1-870c-7abad5f11e5d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b847addf058f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 seconds ago       Running             coredns                   0                   ae7a94f6a58fd       coredns-5cfdc65f69-qw2r2
	2ad750ea95720       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 seconds ago       Running             coredns                   0                   2bdda16abcb20       coredns-5cfdc65f69-v4djk
	8a6bf3b653d9d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 seconds ago       Running             storage-provisioner       0                   34bbdbe2892e1       storage-provisioner
	0339f1f8453e8       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   4 seconds ago       Running             kube-scheduler            3                   b4c9391594a01       kube-scheduler-kubernetes-upgrade-651148
	e3fef5319d8e6       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   8 seconds ago       Running             kube-apiserver            3                   ae0688403dbc7       kube-apiserver-kubernetes-upgrade-651148
	4fd8e82e4c0ee       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   8 seconds ago       Running             kube-controller-manager   3                   e3fcdea10ae96       kube-controller-manager-kubernetes-upgrade-651148
	f5406f4b619f7       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   9 seconds ago       Running             etcd                      3                   5d1ffd1961242       etcd-kubernetes-upgrade-651148
	26246094ac175       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   30 seconds ago      Exited              kube-controller-manager   2                   e3fcdea10ae96       kube-controller-manager-kubernetes-upgrade-651148
	83d75f28d3b3a       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   30 seconds ago      Running             kube-proxy                2                   3b6bc5cecee4f       kube-proxy-lx4qg
	3e0f7815ea5a8       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   34 seconds ago      Exited              etcd                      2                   5d1ffd1961242       etcd-kubernetes-upgrade-651148
	c3656d9b58113       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   34 seconds ago      Exited              kube-scheduler            2                   b4c9391594a01       kube-scheduler-kubernetes-upgrade-651148
	3f87773073f4b       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   34 seconds ago      Exited              kube-apiserver            2                   ae0688403dbc7       kube-apiserver-kubernetes-upgrade-651148
	67ae3de111a87       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   2 minutes ago       Exited              kube-proxy                1                   98c7130b12621       kube-proxy-lx4qg
	
	
	==> coredns [2ad750ea9572051d43d27055142330b43ee1843da75b9b0e74d44811a6ddda39] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [2b847addf058fa3f70b64546f533794637362fa6b86fbf75a6d9b7769a44731f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-651148
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-651148
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 11:42:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-651148
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:44:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 11:44:48 +0000   Mon, 22 Jul 2024 11:42:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 11:44:48 +0000   Mon, 22 Jul 2024 11:42:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 11:44:48 +0000   Mon, 22 Jul 2024 11:42:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 11:44:48 +0000   Mon, 22 Jul 2024 11:42:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    kubernetes-upgrade-651148
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 17f2da4a164c46038a2d1d6596203edc
	  System UUID:                17f2da4a-164c-4603-8a2d-1d6596203edc
	  Boot ID:                    b90e9684-7fa6-4048-b8ba-fde2909e0b34
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-qw2r2                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m11s
	  kube-system                 coredns-5cfdc65f69-v4djk                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m11s
	  kube-system                 etcd-kubernetes-upgrade-651148                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m12s
	  kube-system                 kube-apiserver-kubernetes-upgrade-651148             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-651148    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-proxy-lx4qg                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-scheduler-kubernetes-upgrade-651148             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4s                     kube-proxy       
	  Normal  NodeHasSufficientMemory  2m22s (x8 over 2m23s)  kubelet          Node kubernetes-upgrade-651148 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m22s (x8 over 2m23s)  kubelet          Node kubernetes-upgrade-651148 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m22s (x7 over 2m23s)  kubelet          Node kubernetes-upgrade-651148 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m11s                  node-controller  Node kubernetes-upgrade-651148 event: Registered Node kubernetes-upgrade-651148 in Controller
	  Normal  CIDRAssignmentFailed     2m11s                  cidrAllocator    Node kubernetes-upgrade-651148 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           0s                     node-controller  Node kubernetes-upgrade-651148 event: Registered Node kubernetes-upgrade-651148 in Controller
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.109666] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.055830] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064170] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.176639] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.163980] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.287496] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +4.177309] systemd-fstab-generator[730]: Ignoring "noauto" option for root device
	[  +2.013797] systemd-fstab-generator[853]: Ignoring "noauto" option for root device
	[  +0.067244] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.582665] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	[  +0.101659] kauditd_printk_skb: 69 callbacks suppressed
	[  +3.487003] systemd-fstab-generator[1648]: Ignoring "noauto" option for root device
	[  +0.137332] systemd-fstab-generator[1660]: Ignoring "noauto" option for root device
	[  +0.345867] systemd-fstab-generator[1790]: Ignoring "noauto" option for root device
	[  +0.292257] systemd-fstab-generator[1895]: Ignoring "noauto" option for root device
	[  +0.492130] systemd-fstab-generator[2037]: Ignoring "noauto" option for root device
	[Jul22 11:43] kauditd_printk_skb: 191 callbacks suppressed
	[Jul22 11:44] systemd-fstab-generator[2326]: Ignoring "noauto" option for root device
	[  +8.353425] kauditd_printk_skb: 67 callbacks suppressed
	[ +17.928590] systemd-fstab-generator[3075]: Ignoring "noauto" option for root device
	[  +0.774394] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.001542] kauditd_printk_skb: 37 callbacks suppressed
	[  +0.820721] systemd-fstab-generator[3754]: Ignoring "noauto" option for root device
	
	
	==> etcd [3e0f7815ea5a8bf7e85cdba257f47784b24fd067edfd240fa4c46ee691c8596e] <==
	{"level":"warn","ts":"2024-07-22T11:44:18.681466Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-07-22T11:44:18.681683Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.123:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.39.123:2380","--initial-cluster=kubernetes-upgrade-651148=https://192.168.39.123:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.123:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.123:2380","--name=kubernetes-upgrade-651148","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--sna
pshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-07-22T11:44:18.681877Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-07-22T11:44:18.681918Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-07-22T11:44:18.681943Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.123:2380"]}
	{"level":"info","ts":"2024-07-22T11:44:18.682Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-22T11:44:18.683206Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.123:2379"]}
	{"level":"info","ts":"2024-07-22T11:44:18.683392Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.14","git-sha":"bf51a53a7","go-version":"go1.21.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-651148","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.123:2380"],"listen-peer-urls":["https://192.168.39.123:2380"],"advertise-client-urls":["https://192.168.39.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new
","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-07-22T11:44:18.707596Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"23.917223ms"}
	{"level":"info","ts":"2024-07-22T11:44:18.736977Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-22T11:44:18.74971Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"b780dcaae8448687","local-member-id":"4c9b6dd9118b591e","commit-index":351}
	{"level":"info","ts":"2024-07-22T11:44:18.749959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-22T11:44:18.750016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became follower at term 2"}
	{"level":"info","ts":"2024-07-22T11:44:18.750051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 4c9b6dd9118b591e [peers: [], term: 2, commit: 351, applied: 0, lastindex: 351, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-22T11:44:18.752897Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-22T11:44:18.765109Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":344}
	
	
	==> etcd [f5406f4b619f7fc20c1817bbe0d853afb4e503f84a76067c13f9ad9e95bdbbc5] <==
	{"level":"info","ts":"2024-07-22T11:44:44.68137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e switched to configuration voters=(5520126547342350622)"}
	{"level":"info","ts":"2024-07-22T11:44:44.681436Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b780dcaae8448687","local-member-id":"4c9b6dd9118b591e","added-peer-id":"4c9b6dd9118b591e","added-peer-peer-urls":["https://192.168.39.123:2380"]}
	{"level":"info","ts":"2024-07-22T11:44:44.68155Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b780dcaae8448687","local-member-id":"4c9b6dd9118b591e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:44:44.68159Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:44:44.692284Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-22T11:44:44.692491Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"4c9b6dd9118b591e","initial-advertise-peer-urls":["https://192.168.39.123:2380"],"listen-peer-urls":["https://192.168.39.123:2380"],"advertise-client-urls":["https://192.168.39.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-22T11:44:44.69252Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T11:44:44.692621Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-07-22T11:44:44.692629Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-07-22T11:44:46.5476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-22T11:44:46.547766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-22T11:44:46.547827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e received MsgPreVoteResp from 4c9b6dd9118b591e at term 2"}
	{"level":"info","ts":"2024-07-22T11:44:46.547868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became candidate at term 3"}
	{"level":"info","ts":"2024-07-22T11:44:46.547918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e received MsgVoteResp from 4c9b6dd9118b591e at term 3"}
	{"level":"info","ts":"2024-07-22T11:44:46.547946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became leader at term 3"}
	{"level":"info","ts":"2024-07-22T11:44:46.547971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4c9b6dd9118b591e elected leader 4c9b6dd9118b591e at term 3"}
	{"level":"info","ts":"2024-07-22T11:44:46.549317Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4c9b6dd9118b591e","local-member-attributes":"{Name:kubernetes-upgrade-651148 ClientURLs:[https://192.168.39.123:2379]}","request-path":"/0/members/4c9b6dd9118b591e/attributes","cluster-id":"b780dcaae8448687","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T11:44:46.549322Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:44:46.549548Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T11:44:46.54958Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T11:44:46.549385Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:44:46.550654Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-22T11:44:46.551496Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T11:44:46.550654Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-22T11:44:46.552547Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.123:2379"}
	
	
	==> kernel <==
	 11:44:52 up 2 min,  0 users,  load average: 1.24, 0.46, 0.17
	Linux kubernetes-upgrade-651148 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3f87773073f4b1a43c5d075611019881d08f08a603d25c7f99112d1f3954ffc1] <==
	I0722 11:44:18.640037       1 server.go:142] Version: v1.31.0-beta.0
	I0722 11:44:18.640092       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0722 11:44:19.133835       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:44:19.133957       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0722 11:44:19.135732       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0722 11:44:19.143387       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0722 11:44:19.143422       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0722 11:44:19.143610       1 instance.go:231] Using reconciler: lease
	I0722 11:44:19.144165       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0722 11:44:19.144412       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:44:20.135269       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:44:20.135317       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:44:20.144928       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:44:21.676347       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:44:21.715840       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:44:21.876218       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:44:23.844254       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:44:24.229096       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:44:24.479385       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:44:28.115675       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:44:28.603138       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:44:28.707355       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:44:34.999210       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:44:36.239186       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:44:36.394989       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e3fef5319d8e61250849ce38fe0da6dab4831e04eadde692fb10ad0b6a58ac77] <==
	I0722 11:44:47.861967       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0722 11:44:47.862335       1 aggregator.go:171] initial CRD sync complete...
	I0722 11:44:47.862392       1 autoregister_controller.go:144] Starting autoregister controller
	I0722 11:44:47.862419       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0722 11:44:47.862400       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0722 11:44:47.862468       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0722 11:44:47.862496       1 shared_informer.go:320] Caches are synced for configmaps
	I0722 11:44:47.869446       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E0722 11:44:47.876261       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0722 11:44:47.941533       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0722 11:44:47.949370       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0722 11:44:47.949403       1 policy_source.go:224] refreshing policies
	I0722 11:44:47.961894       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0722 11:44:47.962543       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0722 11:44:47.965763       1 cache.go:39] Caches are synced for autoregister controller
	I0722 11:44:47.969324       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0722 11:44:47.969407       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0722 11:44:48.780858       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0722 11:44:50.021416       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0722 11:44:50.067269       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0722 11:44:50.185449       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0722 11:44:50.226484       1 controller.go:615] quota admission added evaluator for: endpoints
	I0722 11:44:50.275301       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0722 11:44:50.297156       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0722 11:44:51.412085       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [26246094ac175a1803cde74c890546a517cc766e4450de9f0da6527128219af7] <==
	I0722 11:44:22.824546       1 serving.go:386] Generated self-signed cert in-memory
	I0722 11:44:23.131798       1 controllermanager.go:188] "Starting" version="v1.31.0-beta.0"
	I0722 11:44:23.131880       1 controllermanager.go:190] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 11:44:23.133308       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0722 11:44:23.133512       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0722 11:44:23.133587       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0722 11:44:23.133836       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0722 11:44:43.136630       1 controllermanager.go:233] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.123:8443/healthz\": dial tcp 192.168.39.123:8443: connect: connection refused"
	
	
	==> kube-controller-manager [4fd8e82e4c0eee7f90de18bf2d485ea291c26cea00e617028aa82a232788b779] <==
	I0722 11:44:51.569556       1 shared_informer.go:320] Caches are synced for cronjob
	I0722 11:44:51.573837       1 shared_informer.go:320] Caches are synced for job
	I0722 11:44:51.595082       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0722 11:44:51.614092       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0722 11:44:51.635000       1 shared_informer.go:320] Caches are synced for stateful set
	I0722 11:44:51.640354       1 shared_informer.go:320] Caches are synced for attach detach
	I0722 11:44:51.659278       1 shared_informer.go:320] Caches are synced for daemon sets
	I0722 11:44:51.713370       1 shared_informer.go:320] Caches are synced for disruption
	I0722 11:44:51.744343       1 shared_informer.go:320] Caches are synced for deployment
	I0722 11:44:51.847293       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="48.627642ms"
	I0722 11:44:51.847388       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="49.183µs"
	I0722 11:44:52.234600       1 shared_informer.go:320] Caches are synced for resource quota
	I0722 11:44:52.243955       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0722 11:44:52.265327       1 shared_informer.go:320] Caches are synced for taint
	I0722 11:44:52.265495       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0722 11:44:52.265591       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-651148"
	I0722 11:44:52.265643       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0722 11:44:52.313019       1 shared_informer.go:320] Caches are synced for namespace
	I0722 11:44:52.318981       1 shared_informer.go:320] Caches are synced for service account
	I0722 11:44:52.321492       1 shared_informer.go:320] Caches are synced for resource quota
	I0722 11:44:52.329777       1 shared_informer.go:320] Caches are synced for garbage collector
	I0722 11:44:52.329814       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0722 11:44:52.355259       1 shared_informer.go:320] Caches are synced for garbage collector
	I0722 11:44:53.168389       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="15.398085ms"
	I0722 11:44:53.168634       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="140.492µs"
	
	
	==> kube-proxy [67ae3de111a87fbc9e416dd1eb599cab22ab4edd059ac64a6da8d853eb6cb0bb] <==
	command /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 67ae3de111a87fbc9e416dd1eb599cab22ab4edd059ac64a6da8d853eb6cb0bb" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 67ae3de111a87fbc9e416dd1eb599cab22ab4edd059ac64a6da8d853eb6cb0bb": Process exited with status 1
	stdout:
	
	stderr:
	E0722 11:44:55.227266    3935 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="67ae3de111a87fbc9e416dd1eb599cab22ab4edd059ac64a6da8d853eb6cb0bb"
	time="2024-07-22T11:44:55Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	
	==> kube-proxy [83d75f28d3b3a34d41283c74e6731d331f92846fef2641e3f498d915a6c9fc89] <==
	E0722 11:44:22.068031       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0722 11:44:32.072752       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-651148\": net/http: TLS handshake timeout"
	E0722 11:44:39.968440       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-651148\": dial tcp 192.168.39.123:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.123:60034->192.168.39.123:8443: read: connection reset by peer"
	E0722 11:44:42.272946       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-651148\": dial tcp 192.168.39.123:8443: connect: connection refused"
	I0722 11:44:47.875485       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.123"]
	E0722 11:44:47.875652       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0722 11:44:47.910669       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0722 11:44:47.910800       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 11:44:47.910837       1 server_linux.go:170] "Using iptables Proxier"
	I0722 11:44:47.913357       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0722 11:44:47.913785       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0722 11:44:47.913836       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 11:44:47.915324       1 config.go:197] "Starting service config controller"
	I0722 11:44:47.915384       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 11:44:47.915428       1 config.go:104] "Starting endpoint slice config controller"
	I0722 11:44:47.915445       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 11:44:47.916148       1 config.go:326] "Starting node config controller"
	I0722 11:44:47.918788       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 11:44:48.016367       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 11:44:48.016479       1 shared_informer.go:320] Caches are synced for service config
	I0722 11:44:48.019206       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0339f1f8453e8a2992455b590a12351f16bf70d8d83bf8f25e9cea47527cab4c] <==
	I0722 11:44:48.597026       1 serving.go:386] Generated self-signed cert in-memory
	I0722 11:44:49.096544       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0722 11:44:49.096583       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 11:44:49.107157       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0722 11:44:49.107266       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0722 11:44:49.107301       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0722 11:44:49.107334       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0722 11:44:49.117002       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0722 11:44:49.117037       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 11:44:49.117056       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0722 11:44:49.117062       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0722 11:44:49.207457       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0722 11:44:49.217974       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0722 11:44:49.218783       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c3656d9b5811390b40bef7ec3b13e4d94c98eb6e68fae71d7e7853e50dcf4aa3] <==
	I0722 11:44:19.045166       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Jul 22 11:44:45 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:45.590305    3082 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-651148"
	Jul 22 11:44:47 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:47.933006    3082 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98c7130b12621d0cfe4c5031860a4945b6136e6786aeb542b7a145c264ef3c31"
	Jul 22 11:44:48 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:48.022066    3082 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-651148"
	Jul 22 11:44:48 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:48.022127    3082 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-651148"
	Jul 22 11:44:48 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:48.022186    3082 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 22 11:44:48 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:48.023306    3082 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 22 11:44:48 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:48.052424    3082 apiserver.go:52] "Watching apiserver"
	Jul 22 11:44:48 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:48.080526    3082 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 22 11:44:48 kubernetes-upgrade-651148 kubelet[3082]: E0722 11:44:48.094980    3082 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"kube-scheduler-kubernetes-upgrade-651148\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-651148"
	Jul 22 11:44:48 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:48.095275    3082 scope.go:117] "RemoveContainer" containerID="c3656d9b5811390b40bef7ec3b13e4d94c98eb6e68fae71d7e7853e50dcf4aa3"
	Jul 22 11:44:48 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:48.154589    3082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecadbc0a-bdbf-4011-8316-8eb84808b555-xtables-lock\") pod \"kube-proxy-lx4qg\" (UID: \"ecadbc0a-bdbf-4011-8316-8eb84808b555\") " pod="kube-system/kube-proxy-lx4qg"
	Jul 22 11:44:48 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:48.154772    3082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecadbc0a-bdbf-4011-8316-8eb84808b555-lib-modules\") pod \"kube-proxy-lx4qg\" (UID: \"ecadbc0a-bdbf-4011-8316-8eb84808b555\") " pod="kube-system/kube-proxy-lx4qg"
	Jul 22 11:44:48 kubernetes-upgrade-651148 kubelet[3082]: E0722 11:44:48.980340    3082 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-kubernetes-upgrade-651148\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-651148"
	Jul 22 11:44:48 kubernetes-upgrade-651148 kubelet[3082]: E0722 11:44:48.980736    3082 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-651148\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-651148"
	Jul 22 11:44:48 kubernetes-upgrade-651148 kubelet[3082]: E0722 11:44:48.986944    3082 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-651148\" already exists" pod="kube-system/etcd-kubernetes-upgrade-651148"
	Jul 22 11:44:49 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:49.378379    3082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkzbd\" (UniqueName: \"kubernetes.io/projected/e6c9b27a-bfb2-4bf0-a7a3-6d8a8c858b02-kube-api-access-lkzbd\") pod \"storage-provisioner\" (UID: \"e6c9b27a-bfb2-4bf0-a7a3-6d8a8c858b02\") " pod="kube-system/storage-provisioner"
	Jul 22 11:44:49 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:49.378472    3082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4m9q\" (UniqueName: \"kubernetes.io/projected/350c6f5f-fccb-4d0e-951d-5774c992dcf5-kube-api-access-z4m9q\") pod \"coredns-5cfdc65f69-v4djk\" (UID: \"350c6f5f-fccb-4d0e-951d-5774c992dcf5\") " pod="kube-system/coredns-5cfdc65f69-v4djk"
	Jul 22 11:44:49 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:49.378825    3082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t5lq\" (UniqueName: \"kubernetes.io/projected/ae99364b-d72c-4efb-a8cc-378e63c276ed-kube-api-access-5t5lq\") pod \"coredns-5cfdc65f69-qw2r2\" (UID: \"ae99364b-d72c-4efb-a8cc-378e63c276ed\") " pod="kube-system/coredns-5cfdc65f69-qw2r2"
	Jul 22 11:44:49 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:49.379132    3082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e6c9b27a-bfb2-4bf0-a7a3-6d8a8c858b02-tmp\") pod \"storage-provisioner\" (UID: \"e6c9b27a-bfb2-4bf0-a7a3-6d8a8c858b02\") " pod="kube-system/storage-provisioner"
	Jul 22 11:44:49 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:49.379337    3082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae99364b-d72c-4efb-a8cc-378e63c276ed-config-volume\") pod \"coredns-5cfdc65f69-qw2r2\" (UID: \"ae99364b-d72c-4efb-a8cc-378e63c276ed\") " pod="kube-system/coredns-5cfdc65f69-qw2r2"
	Jul 22 11:44:49 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:49.379377    3082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/350c6f5f-fccb-4d0e-951d-5774c992dcf5-config-volume\") pod \"coredns-5cfdc65f69-v4djk\" (UID: \"350c6f5f-fccb-4d0e-951d-5774c992dcf5\") " pod="kube-system/coredns-5cfdc65f69-v4djk"
	Jul 22 11:44:49 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:49.493793    3082 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jul 22 11:44:50 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:50.996942    3082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=130.996918049 podStartE2EDuration="2m10.996918049s" podCreationTimestamp="2024-07-22 11:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-22 11:44:50.009494341 +0000 UTC m=+6.059858244" watchObservedRunningTime="2024-07-22 11:44:50.996918049 +0000 UTC m=+7.047281982"
	Jul 22 11:44:51 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:51.011672    3082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-5cfdc65f69-v4djk" podStartSLOduration=130.011655414 podStartE2EDuration="2m10.011655414s" podCreationTimestamp="2024-07-22 11:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-22 11:44:50.997377622 +0000 UTC m=+7.047741523" watchObservedRunningTime="2024-07-22 11:44:51.011655414 +0000 UTC m=+7.062019312"
	Jul 22 11:44:51 kubernetes-upgrade-651148 kubelet[3082]: I0722 11:44:51.796493    3082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-5cfdc65f69-qw2r2" podStartSLOduration=130.796468631 podStartE2EDuration="2m10.796468631s" podCreationTimestamp="2024-07-22 11:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-22 11:44:51.012047643 +0000 UTC m=+7.062411544" watchObservedRunningTime="2024-07-22 11:44:51.796468631 +0000 UTC m=+7.846832513"
	
	
	==> storage-provisioner [8a6bf3b653d9dfc70530bf0ccc276ad08225108a9c953356afe9ebcaf90405ba] <==
	I0722 11:44:50.009882       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 11:44:50.178938       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 11:44:50.179021       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 11:44:50.244045       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 11:44:50.244223       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-651148_f522569e-f8aa-477d-8a71-7b2874915185!
	I0722 11:44:50.245089       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"822bc199-aca7-468f-97ae-aab6902b2d56", APIVersion:"v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-651148_f522569e-f8aa-477d-8a71-7b2874915185 became leader
	I0722 11:44:50.346041       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-651148_f522569e-f8aa-477d-8a71-7b2874915185!
	

                                                
                                                
-- /stdout --
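The storage-provisioner log above shows the standard client-go leader-election pattern: the process first races for the kube-system/k8s.io-minikube-hostpath lock (the LeaderElection Event shows the lock is recorded on an Endpoints object) and only starts the provisioner controller once the lease is acquired. A minimal sketch of that pattern, assuming client-go and using the newer Lease-based lock purely for illustration (the provisioner in this log uses the older Endpoints lock):

// Sketch only: mirrors the acquire-lock-then-start-controller flow seen in the log above.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	hostname, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		// Same lock name/namespace as in the log; the real provisioner stores it on an Endpoints object.
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// The real provisioner starts its controller loop here ("Starting provisioner controller ...").
				log.Println("acquired lease, starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, shutting down")
			},
		},
	})
}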
** stderr ** 
	E0722 11:44:51.979468   57967 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19313-5960/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
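The "bufio.Scanner: token too long" error in the stderr block is Go's bufio.Scanner hitting its default 64 KiB per-token limit while the post-mortem collector re-reads lastStart.txt; it is a log-collection error, separate from the upgrade failure itself. A minimal sketch of that failure mode and the usual workaround of enlarging the scanner buffer (illustrative only, not minikube's actual logs.go; the path is copied from the log):

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Path taken verbatim from the log above; any file containing a very long line reproduces the issue.
	f, err := os.Open("/home/jenkins/minikube-integration/19313-5960/.minikube/logs/lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// By default a Scanner rejects tokens larger than bufio.MaxScanTokenSize (64 KiB)
	// and fails with "bufio.Scanner: token too long". Raising the limit avoids the error.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err) // with the default buffer this would be bufio.ErrTooLong
	}
}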
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-651148 -n kubernetes-upgrade-651148
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-651148 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-651148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-651148
--- FAIL: TestKubernetesUpgrade (474.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (265.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-101261 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-101261 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m25.193226601s)

                                                
                                                
-- stdout --
	* [old-k8s-version-101261] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19313
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-101261" primary control-plane node in "old-k8s-version-101261" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 11:41:08.672466   55745 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:41:08.672610   55745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:41:08.672622   55745 out.go:304] Setting ErrFile to fd 2...
	I0722 11:41:08.672628   55745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:41:08.672903   55745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:41:08.673693   55745 out.go:298] Setting JSON to false
	I0722 11:41:08.675020   55745 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5021,"bootTime":1721643448,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 11:41:08.675093   55745 start.go:139] virtualization: kvm guest
	I0722 11:41:08.677283   55745 out.go:177] * [old-k8s-version-101261] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 11:41:08.678537   55745 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 11:41:08.678616   55745 notify.go:220] Checking for updates...
	I0722 11:41:08.681120   55745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 11:41:08.682479   55745 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:41:08.683803   55745 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:41:08.685069   55745 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 11:41:08.686250   55745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 11:41:08.687890   55745 config.go:182] Loaded profile config "cert-expiration-467176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:41:08.688024   55745 config.go:182] Loaded profile config "cert-options-435680": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:41:08.688142   55745 config.go:182] Loaded profile config "kubernetes-upgrade-651148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 11:41:08.688254   55745 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 11:41:08.729938   55745 out.go:177] * Using the kvm2 driver based on user configuration
	I0722 11:41:08.731188   55745 start.go:297] selected driver: kvm2
	I0722 11:41:08.731207   55745 start.go:901] validating driver "kvm2" against <nil>
	I0722 11:41:08.731221   55745 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 11:41:08.732257   55745 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:41:08.732352   55745 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 11:41:08.750996   55745 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 11:41:08.751051   55745 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 11:41:08.751352   55745 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:41:08.751396   55745 cni.go:84] Creating CNI manager for ""
	I0722 11:41:08.751406   55745 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:41:08.751419   55745 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 11:41:08.751511   55745 start.go:340] cluster config:
	{Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:41:08.751657   55745 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:41:08.753405   55745 out.go:177] * Starting "old-k8s-version-101261" primary control-plane node in "old-k8s-version-101261" cluster
	I0722 11:41:08.754704   55745 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 11:41:08.754744   55745 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0722 11:41:08.754756   55745 cache.go:56] Caching tarball of preloaded images
	I0722 11:41:08.754842   55745 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 11:41:08.754854   55745 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0722 11:41:08.754984   55745 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/config.json ...
	I0722 11:41:08.755011   55745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/config.json: {Name:mkf08068672b537d68ff00f04b778294df433e60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:41:08.755191   55745 start.go:360] acquireMachinesLock for old-k8s-version-101261: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:41:08.755244   55745 start.go:364] duration metric: took 29.944µs to acquireMachinesLock for "old-k8s-version-101261"
	I0722 11:41:08.755266   55745 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:41:08.755349   55745 start.go:125] createHost starting for "" (driver="kvm2")
	I0722 11:41:08.757108   55745 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 11:41:08.757277   55745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:41:08.757326   55745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:41:08.774987   55745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I0722 11:41:08.775492   55745 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:41:08.776114   55745 main.go:141] libmachine: Using API Version  1
	I0722 11:41:08.776135   55745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:41:08.776493   55745 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:41:08.776710   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:41:08.776860   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:41:08.777065   55745 start.go:159] libmachine.API.Create for "old-k8s-version-101261" (driver="kvm2")
	I0722 11:41:08.777097   55745 client.go:168] LocalClient.Create starting
	I0722 11:41:08.777132   55745 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem
	I0722 11:41:08.777172   55745 main.go:141] libmachine: Decoding PEM data...
	I0722 11:41:08.777193   55745 main.go:141] libmachine: Parsing certificate...
	I0722 11:41:08.777254   55745 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem
	I0722 11:41:08.777279   55745 main.go:141] libmachine: Decoding PEM data...
	I0722 11:41:08.777295   55745 main.go:141] libmachine: Parsing certificate...
	I0722 11:41:08.777319   55745 main.go:141] libmachine: Running pre-create checks...
	I0722 11:41:08.777332   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .PreCreateCheck
	I0722 11:41:08.777727   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetConfigRaw
	I0722 11:41:08.778191   55745 main.go:141] libmachine: Creating machine...
	I0722 11:41:08.778207   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .Create
	I0722 11:41:08.778338   55745 main.go:141] libmachine: (old-k8s-version-101261) Creating KVM machine...
	I0722 11:41:08.779729   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found existing default KVM network
	I0722 11:41:08.781250   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:08.781103   55767 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:36:a5:f9} reservation:<nil>}
	I0722 11:41:08.782830   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:08.782716   55767 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010ff80}
	I0722 11:41:08.782899   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | created network xml: 
	I0722 11:41:08.782916   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | <network>
	I0722 11:41:08.782925   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG |   <name>mk-old-k8s-version-101261</name>
	I0722 11:41:08.782932   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG |   <dns enable='no'/>
	I0722 11:41:08.782941   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG |   
	I0722 11:41:08.782950   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0722 11:41:08.782969   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG |     <dhcp>
	I0722 11:41:08.782981   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0722 11:41:08.782991   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG |     </dhcp>
	I0722 11:41:08.782997   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG |   </ip>
	I0722 11:41:08.783005   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG |   
	I0722 11:41:08.783010   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | </network>
	I0722 11:41:08.783020   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | 
	I0722 11:41:08.789695   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | trying to create private KVM network mk-old-k8s-version-101261 192.168.50.0/24...
	I0722 11:41:08.871417   55745 main.go:141] libmachine: (old-k8s-version-101261) Setting up store path in /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261 ...
	I0722 11:41:08.871462   55745 main.go:141] libmachine: (old-k8s-version-101261) Building disk image from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 11:41:08.871485   55745 main.go:141] libmachine: (old-k8s-version-101261) Downloading /home/jenkins/minikube-integration/19313-5960/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 11:41:08.871502   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | private KVM network mk-old-k8s-version-101261 192.168.50.0/24 created
	I0722 11:41:08.871518   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:08.871322   55767 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:41:09.148962   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:09.148809   55767 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa...
	I0722 11:41:09.392134   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:09.391986   55767 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/old-k8s-version-101261.rawdisk...
	I0722 11:41:09.392172   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | Writing magic tar header
	I0722 11:41:09.392194   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | Writing SSH key tar header
	I0722 11:41:09.392207   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:09.392129   55767 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261 ...
	I0722 11:41:09.392301   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261
	I0722 11:41:09.392335   55745 main.go:141] libmachine: (old-k8s-version-101261) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261 (perms=drwx------)
	I0722 11:41:09.392348   55745 main.go:141] libmachine: (old-k8s-version-101261) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines (perms=drwxr-xr-x)
	I0722 11:41:09.392359   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines
	I0722 11:41:09.392413   55745 main.go:141] libmachine: (old-k8s-version-101261) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube (perms=drwxr-xr-x)
	I0722 11:41:09.392445   55745 main.go:141] libmachine: (old-k8s-version-101261) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960 (perms=drwxrwxr-x)
	I0722 11:41:09.392483   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:41:09.392506   55745 main.go:141] libmachine: (old-k8s-version-101261) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0722 11:41:09.392520   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960
	I0722 11:41:09.392541   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0722 11:41:09.392554   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | Checking permissions on dir: /home/jenkins
	I0722 11:41:09.392565   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | Checking permissions on dir: /home
	I0722 11:41:09.392577   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | Skipping /home - not owner
	I0722 11:41:09.392609   55745 main.go:141] libmachine: (old-k8s-version-101261) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0722 11:41:09.392625   55745 main.go:141] libmachine: (old-k8s-version-101261) Creating domain...
	I0722 11:41:09.393621   55745 main.go:141] libmachine: (old-k8s-version-101261) define libvirt domain using xml: 
	I0722 11:41:09.393652   55745 main.go:141] libmachine: (old-k8s-version-101261) <domain type='kvm'>
	I0722 11:41:09.393662   55745 main.go:141] libmachine: (old-k8s-version-101261)   <name>old-k8s-version-101261</name>
	I0722 11:41:09.393677   55745 main.go:141] libmachine: (old-k8s-version-101261)   <memory unit='MiB'>2200</memory>
	I0722 11:41:09.393692   55745 main.go:141] libmachine: (old-k8s-version-101261)   <vcpu>2</vcpu>
	I0722 11:41:09.393702   55745 main.go:141] libmachine: (old-k8s-version-101261)   <features>
	I0722 11:41:09.393714   55745 main.go:141] libmachine: (old-k8s-version-101261)     <acpi/>
	I0722 11:41:09.393725   55745 main.go:141] libmachine: (old-k8s-version-101261)     <apic/>
	I0722 11:41:09.393741   55745 main.go:141] libmachine: (old-k8s-version-101261)     <pae/>
	I0722 11:41:09.393753   55745 main.go:141] libmachine: (old-k8s-version-101261)     
	I0722 11:41:09.393765   55745 main.go:141] libmachine: (old-k8s-version-101261)   </features>
	I0722 11:41:09.393775   55745 main.go:141] libmachine: (old-k8s-version-101261)   <cpu mode='host-passthrough'>
	I0722 11:41:09.393784   55745 main.go:141] libmachine: (old-k8s-version-101261)   
	I0722 11:41:09.393805   55745 main.go:141] libmachine: (old-k8s-version-101261)   </cpu>
	I0722 11:41:09.393817   55745 main.go:141] libmachine: (old-k8s-version-101261)   <os>
	I0722 11:41:09.393828   55745 main.go:141] libmachine: (old-k8s-version-101261)     <type>hvm</type>
	I0722 11:41:09.393840   55745 main.go:141] libmachine: (old-k8s-version-101261)     <boot dev='cdrom'/>
	I0722 11:41:09.393850   55745 main.go:141] libmachine: (old-k8s-version-101261)     <boot dev='hd'/>
	I0722 11:41:09.393863   55745 main.go:141] libmachine: (old-k8s-version-101261)     <bootmenu enable='no'/>
	I0722 11:41:09.393876   55745 main.go:141] libmachine: (old-k8s-version-101261)   </os>
	I0722 11:41:09.393908   55745 main.go:141] libmachine: (old-k8s-version-101261)   <devices>
	I0722 11:41:09.393940   55745 main.go:141] libmachine: (old-k8s-version-101261)     <disk type='file' device='cdrom'>
	I0722 11:41:09.393959   55745 main.go:141] libmachine: (old-k8s-version-101261)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/boot2docker.iso'/>
	I0722 11:41:09.393971   55745 main.go:141] libmachine: (old-k8s-version-101261)       <target dev='hdc' bus='scsi'/>
	I0722 11:41:09.393980   55745 main.go:141] libmachine: (old-k8s-version-101261)       <readonly/>
	I0722 11:41:09.393990   55745 main.go:141] libmachine: (old-k8s-version-101261)     </disk>
	I0722 11:41:09.394000   55745 main.go:141] libmachine: (old-k8s-version-101261)     <disk type='file' device='disk'>
	I0722 11:41:09.394016   55745 main.go:141] libmachine: (old-k8s-version-101261)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0722 11:41:09.394043   55745 main.go:141] libmachine: (old-k8s-version-101261)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/old-k8s-version-101261.rawdisk'/>
	I0722 11:41:09.394053   55745 main.go:141] libmachine: (old-k8s-version-101261)       <target dev='hda' bus='virtio'/>
	I0722 11:41:09.394067   55745 main.go:141] libmachine: (old-k8s-version-101261)     </disk>
	I0722 11:41:09.394075   55745 main.go:141] libmachine: (old-k8s-version-101261)     <interface type='network'>
	I0722 11:41:09.394087   55745 main.go:141] libmachine: (old-k8s-version-101261)       <source network='mk-old-k8s-version-101261'/>
	I0722 11:41:09.394098   55745 main.go:141] libmachine: (old-k8s-version-101261)       <model type='virtio'/>
	I0722 11:41:09.394110   55745 main.go:141] libmachine: (old-k8s-version-101261)     </interface>
	I0722 11:41:09.394121   55745 main.go:141] libmachine: (old-k8s-version-101261)     <interface type='network'>
	I0722 11:41:09.394131   55745 main.go:141] libmachine: (old-k8s-version-101261)       <source network='default'/>
	I0722 11:41:09.394141   55745 main.go:141] libmachine: (old-k8s-version-101261)       <model type='virtio'/>
	I0722 11:41:09.394153   55745 main.go:141] libmachine: (old-k8s-version-101261)     </interface>
	I0722 11:41:09.394164   55745 main.go:141] libmachine: (old-k8s-version-101261)     <serial type='pty'>
	I0722 11:41:09.394176   55745 main.go:141] libmachine: (old-k8s-version-101261)       <target port='0'/>
	I0722 11:41:09.394200   55745 main.go:141] libmachine: (old-k8s-version-101261)     </serial>
	I0722 11:41:09.394212   55745 main.go:141] libmachine: (old-k8s-version-101261)     <console type='pty'>
	I0722 11:41:09.394223   55745 main.go:141] libmachine: (old-k8s-version-101261)       <target type='serial' port='0'/>
	I0722 11:41:09.394234   55745 main.go:141] libmachine: (old-k8s-version-101261)     </console>
	I0722 11:41:09.394243   55745 main.go:141] libmachine: (old-k8s-version-101261)     <rng model='virtio'>
	I0722 11:41:09.394253   55745 main.go:141] libmachine: (old-k8s-version-101261)       <backend model='random'>/dev/random</backend>
	I0722 11:41:09.394269   55745 main.go:141] libmachine: (old-k8s-version-101261)     </rng>
	I0722 11:41:09.394281   55745 main.go:141] libmachine: (old-k8s-version-101261)     
	I0722 11:41:09.394290   55745 main.go:141] libmachine: (old-k8s-version-101261)     
	I0722 11:41:09.394300   55745 main.go:141] libmachine: (old-k8s-version-101261)   </devices>
	I0722 11:41:09.394307   55745 main.go:141] libmachine: (old-k8s-version-101261) </domain>
	I0722 11:41:09.394317   55745 main.go:141] libmachine: (old-k8s-version-101261) 
	I0722 11:41:09.397812   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:20:d0:cc in network default
	I0722 11:41:09.398345   55745 main.go:141] libmachine: (old-k8s-version-101261) Ensuring networks are active...
	I0722 11:41:09.398372   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:09.398886   55745 main.go:141] libmachine: (old-k8s-version-101261) Ensuring network default is active
	I0722 11:41:09.399167   55745 main.go:141] libmachine: (old-k8s-version-101261) Ensuring network mk-old-k8s-version-101261 is active
	I0722 11:41:09.399616   55745 main.go:141] libmachine: (old-k8s-version-101261) Getting domain xml...
	I0722 11:41:09.400309   55745 main.go:141] libmachine: (old-k8s-version-101261) Creating domain...
	I0722 11:41:10.746212   55745 main.go:141] libmachine: (old-k8s-version-101261) Waiting to get IP...
	I0722 11:41:10.746975   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:10.747398   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:41:10.747429   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:10.747376   55767 retry.go:31] will retry after 304.507247ms: waiting for machine to come up
	I0722 11:41:11.053863   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:11.054370   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:41:11.054395   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:11.054330   55767 retry.go:31] will retry after 349.842586ms: waiting for machine to come up
	I0722 11:41:11.405850   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:11.406331   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:41:11.406359   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:11.406283   55767 retry.go:31] will retry after 477.196921ms: waiting for machine to come up
	I0722 11:41:11.885065   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:11.885612   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:41:11.885640   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:11.885567   55767 retry.go:31] will retry after 432.069094ms: waiting for machine to come up
	I0722 11:41:12.318981   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:12.319665   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:41:12.319693   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:12.319614   55767 retry.go:31] will retry after 665.03878ms: waiting for machine to come up
	I0722 11:41:12.986364   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:12.986911   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:41:12.986941   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:12.986860   55767 retry.go:31] will retry after 631.844142ms: waiting for machine to come up
	I0722 11:41:13.620340   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:13.620761   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:41:13.620791   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:13.620722   55767 retry.go:31] will retry after 1.115172243s: waiting for machine to come up
	I0722 11:41:14.737377   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:14.737792   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:41:14.737815   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:14.737758   55767 retry.go:31] will retry after 1.27680297s: waiting for machine to come up
	I0722 11:41:16.016331   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:16.016935   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:41:16.016966   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:16.016863   55767 retry.go:31] will retry after 1.675507518s: waiting for machine to come up
	I0722 11:41:17.693831   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:17.694448   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:41:17.694484   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:17.694392   55767 retry.go:31] will retry after 1.58500428s: waiting for machine to come up
	I0722 11:41:19.280898   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:19.281502   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:41:19.281565   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:19.281458   55767 retry.go:31] will retry after 2.231096441s: waiting for machine to come up
	I0722 11:41:21.514159   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:21.514585   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:41:21.514615   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:21.514528   55767 retry.go:31] will retry after 3.483044023s: waiting for machine to come up
	I0722 11:41:24.999993   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:25.000525   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:41:25.000550   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:41:25.000485   55767 retry.go:31] will retry after 3.433084672s: waiting for machine to come up
	I0722 11:41:28.437469   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:28.438145   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has current primary IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:28.438208   55745 main.go:141] libmachine: (old-k8s-version-101261) Found IP for machine: 192.168.50.51
	I0722 11:41:28.438233   55745 main.go:141] libmachine: (old-k8s-version-101261) Reserving static IP address...
	I0722 11:41:28.438610   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-101261", mac: "52:54:00:e5:34:9a", ip: "192.168.50.51"} in network mk-old-k8s-version-101261
	I0722 11:41:28.519697   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | Getting to WaitForSSH function...
	I0722 11:41:28.519730   55745 main.go:141] libmachine: (old-k8s-version-101261) Reserved static IP address: 192.168.50.51
	I0722 11:41:28.519750   55745 main.go:141] libmachine: (old-k8s-version-101261) Waiting for SSH to be available...
	I0722 11:41:28.522379   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:28.522899   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:28.522941   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:28.523023   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using SSH client type: external
	I0722 11:41:28.523052   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa (-rw-------)
	I0722 11:41:28.523099   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:41:28.523117   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | About to run SSH command:
	I0722 11:41:28.523127   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | exit 0
	I0722 11:41:28.652524   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | SSH cmd err, output: <nil>: 
	I0722 11:41:28.652763   55745 main.go:141] libmachine: (old-k8s-version-101261) KVM machine creation complete!
	I0722 11:41:28.653054   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetConfigRaw
	I0722 11:41:28.653586   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:41:28.653787   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:41:28.653943   55745 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0722 11:41:28.653956   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetState
	I0722 11:41:28.655206   55745 main.go:141] libmachine: Detecting operating system of created instance...
	I0722 11:41:28.655220   55745 main.go:141] libmachine: Waiting for SSH to be available...
	I0722 11:41:28.655228   55745 main.go:141] libmachine: Getting to WaitForSSH function...
	I0722 11:41:28.655236   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:41:28.657542   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:28.657935   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:28.657963   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:28.658105   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:41:28.658283   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:41:28.658445   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:41:28.658564   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:41:28.658748   55745 main.go:141] libmachine: Using SSH client type: native
	I0722 11:41:28.658969   55745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:41:28.658982   55745 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0722 11:41:28.775586   55745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:41:28.775611   55745 main.go:141] libmachine: Detecting the provisioner...
	I0722 11:41:28.775621   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:41:28.778384   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:28.778704   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:28.778732   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:28.778878   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:41:28.779067   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:41:28.779206   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:41:28.779333   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:41:28.779488   55745 main.go:141] libmachine: Using SSH client type: native
	I0722 11:41:28.779640   55745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:41:28.779650   55745 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0722 11:41:28.889057   55745 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0722 11:41:28.889149   55745 main.go:141] libmachine: found compatible host: buildroot
	I0722 11:41:28.889162   55745 main.go:141] libmachine: Provisioning with buildroot...
	I0722 11:41:28.889174   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:41:28.889440   55745 buildroot.go:166] provisioning hostname "old-k8s-version-101261"
	I0722 11:41:28.889465   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:41:28.889607   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:41:28.891903   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:28.892243   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:28.892266   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:28.892398   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:41:28.892556   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:41:28.892684   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:41:28.892789   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:41:28.892951   55745 main.go:141] libmachine: Using SSH client type: native
	I0722 11:41:28.893118   55745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:41:28.893130   55745 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-101261 && echo "old-k8s-version-101261" | sudo tee /etc/hostname
	I0722 11:41:29.018321   55745 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-101261
	
	I0722 11:41:29.018353   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:41:29.020992   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:29.021350   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:29.021389   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:29.021580   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:41:29.021780   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:41:29.021952   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:41:29.022108   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:41:29.022272   55745 main.go:141] libmachine: Using SSH client type: native
	I0722 11:41:29.022474   55745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:41:29.022505   55745 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-101261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-101261/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-101261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:41:29.140785   55745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:41:29.140811   55745 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:41:29.140831   55745 buildroot.go:174] setting up certificates
	I0722 11:41:29.140841   55745 provision.go:84] configureAuth start
	I0722 11:41:29.140853   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:41:29.141119   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:41:29.143665   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:29.144066   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:29.144084   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:29.144224   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:41:29.146290   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:29.146603   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:29.146635   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:29.146769   55745 provision.go:143] copyHostCerts
	I0722 11:41:29.146827   55745 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:41:29.146839   55745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:41:29.146915   55745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:41:29.146997   55745 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:41:29.147005   55745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:41:29.147023   55745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:41:29.147081   55745 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:41:29.147088   55745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:41:29.147105   55745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:41:29.147147   55745 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-101261 san=[127.0.0.1 192.168.50.51 localhost minikube old-k8s-version-101261]
	I0722 11:41:29.477369   55745 provision.go:177] copyRemoteCerts
	I0722 11:41:29.477426   55745 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:41:29.477465   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:41:29.480203   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:29.480575   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:29.480610   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:29.480750   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:41:29.480936   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:41:29.481091   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:41:29.481261   55745 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:41:29.566881   55745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:41:29.590788   55745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 11:41:29.613489   55745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:41:29.636914   55745 provision.go:87] duration metric: took 496.062226ms to configureAuth
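	The configureAuth step above generated a server certificate whose SANs are listed in the provision log (127.0.0.1, 192.168.50.51, localhost, minikube, old-k8s-version-101261) and copied it to /etc/docker/server.pem on the guest. A minimal sketch for spot-checking those SANs from the host, assuming the profile name and paths shown in this log:

	    # hypothetical spot-check; prints the cert's Subject Alternative Name block
	    minikube ssh -p old-k8s-version-101261 -- sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'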
	I0722 11:41:29.636941   55745 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:41:29.637076   55745 config.go:182] Loaded profile config "old-k8s-version-101261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 11:41:29.637136   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:41:29.639412   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:29.639699   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:29.639726   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:29.639892   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:41:29.640103   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:41:29.640278   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:41:29.640453   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:41:29.640639   55745 main.go:141] libmachine: Using SSH client type: native
	I0722 11:41:29.640834   55745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:41:29.640850   55745 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:41:29.911956   55745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:41:29.911980   55745 main.go:141] libmachine: Checking connection to Docker...
	I0722 11:41:29.911988   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetURL
	I0722 11:41:29.913283   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using libvirt version 6000000
	I0722 11:41:29.915294   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:29.915621   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:29.915667   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:29.915824   55745 main.go:141] libmachine: Docker is up and running!
	I0722 11:41:29.915839   55745 main.go:141] libmachine: Reticulating splines...
	I0722 11:41:29.915846   55745 client.go:171] duration metric: took 21.138738979s to LocalClient.Create
	I0722 11:41:29.915877   55745 start.go:167] duration metric: took 21.13880562s to libmachine.API.Create "old-k8s-version-101261"
	I0722 11:41:29.915893   55745 start.go:293] postStartSetup for "old-k8s-version-101261" (driver="kvm2")
	I0722 11:41:29.915909   55745 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:41:29.915925   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:41:29.916132   55745 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:41:29.916154   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:41:29.918248   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:29.918517   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:29.918546   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:29.918656   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:41:29.918815   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:41:29.918957   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:41:29.919096   55745 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:41:30.007161   55745 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:41:30.011532   55745 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:41:30.011552   55745 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:41:30.011606   55745 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:41:30.011712   55745 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:41:30.011833   55745 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:41:30.021177   55745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:41:30.045476   55745 start.go:296] duration metric: took 129.571112ms for postStartSetup
	I0722 11:41:30.045533   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetConfigRaw
	I0722 11:41:30.046100   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:41:30.048500   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:30.048838   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:30.048859   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:30.049162   55745 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/config.json ...
	I0722 11:41:30.049398   55745 start.go:128] duration metric: took 21.294036936s to createHost
	I0722 11:41:30.049428   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:41:30.051665   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:30.052012   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:30.052043   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:30.052172   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:41:30.052372   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:41:30.052557   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:41:30.052719   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:41:30.052883   55745 main.go:141] libmachine: Using SSH client type: native
	I0722 11:41:30.053082   55745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:41:30.053094   55745 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 11:41:30.164721   55745 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721648490.134733613
	
	I0722 11:41:30.164743   55745 fix.go:216] guest clock: 1721648490.134733613
	I0722 11:41:30.164750   55745 fix.go:229] Guest: 2024-07-22 11:41:30.134733613 +0000 UTC Remote: 2024-07-22 11:41:30.049414602 +0000 UTC m=+21.427208854 (delta=85.319011ms)
	I0722 11:41:30.164785   55745 fix.go:200] guest clock delta is within tolerance: 85.319011ms
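	For reference, the delta is just the guest clock minus the host-side timestamp: 11:41:30.134733613 - 11:41:30.049414602 = 0.085319011 s, i.e. the 85.319011ms reported above, which is why the check passes without adjusting the guest clock.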
	I0722 11:41:30.164792   55745 start.go:83] releasing machines lock for "old-k8s-version-101261", held for 21.409537006s
	I0722 11:41:30.164827   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:41:30.165051   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:41:30.167650   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:30.168030   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:30.168057   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:30.168208   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:41:30.168679   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:41:30.168853   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:41:30.168941   55745 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:41:30.168985   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:41:30.169069   55745 ssh_runner.go:195] Run: cat /version.json
	I0722 11:41:30.169092   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:41:30.171622   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:30.171974   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:30.172019   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:30.172045   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:30.172205   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:41:30.172397   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:41:30.172564   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:41:30.172612   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:30.172634   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:30.172707   55745 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:41:30.172799   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:41:30.172936   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:41:30.173065   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:41:30.173219   55745 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:41:30.278465   55745 ssh_runner.go:195] Run: systemctl --version
	I0722 11:41:30.284733   55745 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:41:30.450634   55745 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:41:30.457196   55745 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:41:30.457270   55745 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:41:30.481510   55745 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:41:30.481537   55745 start.go:495] detecting cgroup driver to use...
	I0722 11:41:30.481619   55745 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:41:30.502441   55745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:41:30.515809   55745 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:41:30.515853   55745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:41:30.529754   55745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:41:30.545476   55745 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:41:30.680962   55745 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:41:30.835193   55745 docker.go:233] disabling docker service ...
	I0722 11:41:30.835246   55745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:41:30.853799   55745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:41:30.866668   55745 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:41:31.011041   55745 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:41:31.150521   55745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:41:31.164081   55745 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:41:31.182375   55745 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 11:41:31.182438   55745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:41:31.192594   55745 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:41:31.192650   55745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:41:31.203323   55745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:41:31.213328   55745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
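	Between the crictl endpoint written by the tee a few lines up and the sed edits above, the container-runtime plumbing on the node ends up roughly as follows (a sketch of only the lines touched; the rest of the CRI-O drop-in is left as shipped):

	    # /etc/crictl.yaml, as written above
	    runtime-endpoint: unix:///var/run/crio/crio.sock

	    # /etc/crio/crio.conf.d/02-crio.conf, lines affected by the edits above
	    pause_image = "registry.k8s.io/pause:3.2"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"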
	I0722 11:41:31.223725   55745 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:41:31.236113   55745 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:41:31.245773   55745 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:41:31.245829   55745 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:41:31.260067   55745 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:41:31.269570   55745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:41:31.395416   55745 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:41:31.540817   55745 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:41:31.540912   55745 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:41:31.545664   55745 start.go:563] Will wait 60s for crictl version
	I0722 11:41:31.545724   55745 ssh_runner.go:195] Run: which crictl
	I0722 11:41:31.549280   55745 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:41:31.596199   55745 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:41:31.596291   55745 ssh_runner.go:195] Run: crio --version
	I0722 11:41:31.631851   55745 ssh_runner.go:195] Run: crio --version
	I0722 11:41:31.661902   55745 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 11:41:31.663153   55745 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:41:31.666326   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:31.666814   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:41:23 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:41:31.666866   55745 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:41:31.667220   55745 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 11:41:31.671480   55745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
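	The one-liner above pins host.minikube.internal idempotently: it drops any existing entry, re-appends the current one, and writes through a temp file plus sudo cp because /etc/hosts cannot be opened for writing by an unprivileged redirect. Roughly the same command, reformatted for readability:

	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      printf '192.168.50.1\thost.minikube.internal\n'
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts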
	I0722 11:41:31.683865   55745 kubeadm.go:883] updating cluster {Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:41:31.683996   55745 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 11:41:31.684051   55745 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:41:31.715808   55745 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:41:31.715886   55745 ssh_runner.go:195] Run: which lz4
	I0722 11:41:31.720355   55745 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 11:41:31.724333   55745 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:41:31.724368   55745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 11:41:33.411957   55745 crio.go:462] duration metric: took 1.691633489s to copy over tarball
	I0722 11:41:33.412033   55745 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:41:36.004744   55745 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.59267833s)
	I0722 11:41:36.004778   55745 crio.go:469] duration metric: took 2.592793925s to extract the tarball
	I0722 11:41:36.004786   55745 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:41:36.049470   55745 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:41:36.094634   55745 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:41:36.094658   55745 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 11:41:36.094741   55745 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:41:36.094775   55745 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 11:41:36.094979   55745 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:41:36.095000   55745 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:41:36.095086   55745 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 11:41:36.094981   55745 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:41:36.094983   55745 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:41:36.094737   55745 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:41:36.096208   55745 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:41:36.096248   55745 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:41:36.096209   55745 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:41:36.096274   55745 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:41:36.096284   55745 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:41:36.096210   55745 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 11:41:36.096302   55745 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:41:36.096312   55745 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 11:41:36.263510   55745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:41:36.265196   55745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:41:36.266546   55745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 11:41:36.275758   55745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 11:41:36.284304   55745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:41:36.293462   55745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:41:36.359241   55745 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 11:41:36.359300   55745 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:41:36.359355   55745 ssh_runner.go:195] Run: which crictl
	I0722 11:41:36.385172   55745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 11:41:36.404187   55745 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 11:41:36.404236   55745 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:41:36.404287   55745 ssh_runner.go:195] Run: which crictl
	I0722 11:41:36.412757   55745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:41:36.427827   55745 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 11:41:36.427870   55745 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 11:41:36.427899   55745 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 11:41:36.427903   55745 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:41:36.427947   55745 ssh_runner.go:195] Run: which crictl
	I0722 11:41:36.427987   55745 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 11:41:36.427947   55745 ssh_runner.go:195] Run: which crictl
	I0722 11:41:36.428023   55745 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:41:36.428084   55745 ssh_runner.go:195] Run: which crictl
	I0722 11:41:36.470200   55745 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 11:41:36.470249   55745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:41:36.470254   55745 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:41:36.470282   55745 ssh_runner.go:195] Run: which crictl
	I0722 11:41:36.496118   55745 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 11:41:36.496174   55745 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 11:41:36.496197   55745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:41:36.496215   55745 ssh_runner.go:195] Run: which crictl
	I0722 11:41:36.608882   55745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:41:36.608941   55745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 11:41:36.608958   55745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 11:41:36.609032   55745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 11:41:36.609033   55745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:41:36.609068   55745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 11:41:36.609091   55745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 11:41:36.737963   55745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 11:41:36.737980   55745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0722 11:41:36.738035   55745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 11:41:36.738082   55745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 11:41:36.738147   55745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 11:41:36.738183   55745 cache_images.go:92] duration metric: took 643.512118ms to LoadCachedImages
	W0722 11:41:36.738239   55745 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0722 11:41:36.738251   55745 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.20.0 crio true true} ...
	I0722 11:41:36.738372   55745 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-101261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
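	Reassembled, the kubelet fragment logged above corresponds to the systemd drop-in that is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. The empty ExecStart= line is the standard systemd idiom for clearing the packaged command before substituting minikube's own:

	    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (reassembled from the log)
	    [Unit]
	    Wants=crio.service

	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-101261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51

	    [Install]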
	I0722 11:41:36.738445   55745 ssh_runner.go:195] Run: crio config
	I0722 11:41:36.790707   55745 cni.go:84] Creating CNI manager for ""
	I0722 11:41:36.790736   55745 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:41:36.790750   55745 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:41:36.790766   55745 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-101261 NodeName:old-k8s-version-101261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 11:41:36.790901   55745 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-101261"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:41:36.790961   55745 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 11:41:36.801815   55745 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:41:36.801912   55745 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:41:36.812596   55745 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0722 11:41:36.829924   55745 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:41:36.848510   55745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
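	The generated kubeadm config (the YAML dumped above) is staged as /var/tmp/minikube/kubeadm.yaml.new and only promoted to kubeadm.yaml after the stale-config check below; on a first start like this one it is simply copied into place. A hypothetical way to compare the staged and active copies when debugging a run like this, assuming the profile name from this test:

	    # staged vs. active kubeadm config; on a fresh node the active file may not exist yet
	    minikube ssh -p old-k8s-version-101261 -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new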
	I0722 11:41:36.866574   55745 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0722 11:41:36.870598   55745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:41:36.884141   55745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:41:37.011261   55745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:41:37.029583   55745 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261 for IP: 192.168.50.51
	I0722 11:41:37.029604   55745 certs.go:194] generating shared ca certs ...
	I0722 11:41:37.029624   55745 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:41:37.029789   55745 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:41:37.029840   55745 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:41:37.029852   55745 certs.go:256] generating profile certs ...
	I0722 11:41:37.029914   55745 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/client.key
	I0722 11:41:37.029932   55745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/client.crt with IP's: []
	I0722 11:41:37.204999   55745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/client.crt ...
	I0722 11:41:37.205035   55745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/client.crt: {Name:mkff86fd37df28e2f2c7c96fd028952c59b00293 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:41:37.205222   55745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/client.key ...
	I0722 11:41:37.205248   55745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/client.key: {Name:mkd8750fd68e67f4122af9bc367d32d6fd6cb774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:41:37.205353   55745 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key.455618c3
	I0722 11:41:37.205375   55745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.crt.455618c3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.51]
	I0722 11:41:37.582514   55745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.crt.455618c3 ...
	I0722 11:41:37.582545   55745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.crt.455618c3: {Name:mk14868bcbaa78ca65d613f3eaf6854d6eee1f1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:41:37.582713   55745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key.455618c3 ...
	I0722 11:41:37.582728   55745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key.455618c3: {Name:mk68fbb88c60be4a8b5351c16fdb8c2282090c01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:41:37.582795   55745 certs.go:381] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.crt.455618c3 -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.crt
	I0722 11:41:37.582862   55745 certs.go:385] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key.455618c3 -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key
	I0722 11:41:37.582915   55745 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key
	I0722 11:41:37.582930   55745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.crt with IP's: []
	I0722 11:41:37.642750   55745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.crt ...
	I0722 11:41:37.642774   55745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.crt: {Name:mk97d22708e509be77571d259f5197995be66fa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:41:37.642919   55745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key ...
	I0722 11:41:37.642931   55745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key: {Name:mk480b866b25c15691d1645d68c70b1e30760092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:41:37.643110   55745 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:41:37.643148   55745 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:41:37.643165   55745 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:41:37.643188   55745 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:41:37.643210   55745 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:41:37.643230   55745 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:41:37.643266   55745 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:41:37.643833   55745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:41:37.670662   55745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:41:37.694220   55745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:41:37.717209   55745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:41:37.740672   55745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 11:41:37.764691   55745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:41:37.788902   55745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:41:37.812352   55745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:41:37.835363   55745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:41:37.860787   55745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:41:37.885327   55745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:41:37.925237   55745 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:41:37.948649   55745 ssh_runner.go:195] Run: openssl version
	I0722 11:41:37.954532   55745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:41:37.973172   55745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:41:37.979108   55745 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:41:37.979174   55745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:41:37.985802   55745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:41:37.996168   55745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:41:38.006828   55745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:41:38.011138   55745 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:41:38.011192   55745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:41:38.016859   55745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:41:38.027545   55745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:41:38.037854   55745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:41:38.042153   55745 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:41:38.042215   55745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:41:38.048045   55745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
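	Each ln above creates the hash-named symlink that OpenSSL uses for CA lookup: the link name is the subject-name hash printed by openssl x509 -hash plus a .0 suffix, which is why the hash is computed immediately before linking. For the minikube CA, matching the link name chosen above:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # prints: b5213941  (hence the /etc/ssl/certs/b5213941.0 symlink)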
	I0722 11:41:38.060410   55745 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:41:38.064882   55745 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 11:41:38.064959   55745 kubeadm.go:392] StartCluster: {Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:41:38.065063   55745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:41:38.065138   55745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:41:38.108911   55745 cri.go:89] found id: ""
	I0722 11:41:38.108982   55745 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:41:38.119824   55745 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:41:38.129585   55745 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:41:38.139297   55745 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:41:38.139320   55745 kubeadm.go:157] found existing configuration files:
	
	I0722 11:41:38.139366   55745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:41:38.148779   55745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:41:38.148837   55745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:41:38.158541   55745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:41:38.167745   55745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:41:38.167797   55745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:41:38.177499   55745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:41:38.186990   55745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:41:38.187056   55745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:41:38.196915   55745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:41:38.206405   55745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:41:38.206450   55745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
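	The four grep/rm pairs above are the stale-config cleanup visible in the kubeadm.go:163 messages: any existing /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is deleted so the next kubeadm init can write a fresh copy. A hedged sketch of the same check as a loop (the endpoint and file names are taken from the log; the loop form itself is illustrative):

	    ENDPOINT="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        # Keep a kubeconfig only if it already targets the expected API server;
	        # otherwise remove it so kubeadm regenerates it on the next init.
	        if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null; then
	            sudo rm -f "/etc/kubernetes/$f"
	        fi
	    done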
	I0722 11:41:38.217221   55745 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:41:38.340151   55745 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:41:38.340309   55745 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:41:38.492826   55745 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:41:38.492994   55745 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:41:38.493168   55745 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:41:38.675803   55745 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:41:38.755348   55745 out.go:204]   - Generating certificates and keys ...
	I0722 11:41:38.755482   55745 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:41:38.755630   55745 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:41:38.810869   55745 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0722 11:41:38.922818   55745 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0722 11:41:39.044038   55745 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0722 11:41:39.447666   55745 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0722 11:41:39.770338   55745 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0722 11:41:39.770647   55745 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-101261] and IPs [192.168.50.51 127.0.0.1 ::1]
	I0722 11:41:40.008166   55745 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0722 11:41:40.008435   55745 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-101261] and IPs [192.168.50.51 127.0.0.1 ::1]
	I0722 11:41:40.320175   55745 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0722 11:41:40.551350   55745 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0722 11:41:40.649220   55745 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0722 11:41:40.649490   55745 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:41:40.750407   55745 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:41:40.924713   55745 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:41:41.223779   55745 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:41:41.305629   55745 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:41:41.322525   55745 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:41:41.324879   55745 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:41:41.325063   55745 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:41:41.443995   55745 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:41:41.445583   55745 out.go:204]   - Booting up control plane ...
	I0722 11:41:41.445717   55745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:41:41.465778   55745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:41:41.467134   55745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:41:41.468850   55745 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:41:41.472571   55745 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:42:21.465376   55745 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:42:21.466360   55745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:42:21.466615   55745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:42:26.467241   55745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:42:26.467578   55745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:42:36.466519   55745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:42:36.466739   55745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:42:56.465610   55745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:42:56.465819   55745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:43:36.467508   55745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:43:36.467757   55745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:43:36.467772   55745 kubeadm.go:310] 
	I0722 11:43:36.467815   55745 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:43:36.467890   55745 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:43:36.467924   55745 kubeadm.go:310] 
	I0722 11:43:36.467979   55745 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:43:36.468023   55745 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:43:36.468171   55745 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:43:36.468186   55745 kubeadm.go:310] 
	I0722 11:43:36.468333   55745 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:43:36.468413   55745 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:43:36.468460   55745 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:43:36.468476   55745 kubeadm.go:310] 
	I0722 11:43:36.468647   55745 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:43:36.468780   55745 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:43:36.468800   55745 kubeadm.go:310] 
	I0722 11:43:36.468981   55745 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:43:36.469128   55745 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:43:36.469228   55745 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:43:36.469436   55745 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:43:36.469462   55745 kubeadm.go:310] 
	I0722 11:43:36.470017   55745 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:43:36.470130   55745 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:43:36.470224   55745 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
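	The advice kubeadm prints above reduces to a handful of checks on the node itself; a minimal sequence, assuming the CRI-O socket path /var/run/crio/crio.sock used throughout this run:

	    # The probe kubeadm keeps retrying: the kubelet's local healthz endpoint on port 10248.
	    curl -sSL http://localhost:10248/healthz
	    # Is the kubelet service running, and why did it exit if not?
	    systemctl status kubelet
	    journalctl -xeu kubelet | tail -n 100
	    # Did any control-plane container start and then crash under CRI-O?
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    # Inspect a failing container's logs by the ID found in the listing above.
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID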
	W0722 11:43:36.470385   55745 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-101261] and IPs [192.168.50.51 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-101261] and IPs [192.168.50.51 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0722 11:43:36.470448   55745 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:43:36.948327   55745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
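	Between attempts, the partial control plane is wiped with the kubeadm reset invocation logged above, followed by a check that the kubelet is no longer active. Run by hand, a rough equivalent (using the same bundled v1.20.0 binaries and CRI-O socket as this run) is:

	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	        kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	    # kubeadm reset stops the kubelet and removes the static pod manifests and local
	    # etcd data, so this should report "inactive" before a second kubeadm init.
	    sudo systemctl is-active kubelet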
	I0722 11:43:36.964309   55745 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:43:36.977739   55745 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:43:36.977757   55745 kubeadm.go:157] found existing configuration files:
	
	I0722 11:43:36.977799   55745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:43:36.990338   55745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:43:36.990406   55745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:43:37.000554   55745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:43:37.010564   55745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:43:37.010634   55745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:43:37.021699   55745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:43:37.034478   55745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:43:37.034535   55745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:43:37.047497   55745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:43:37.058817   55745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:43:37.058869   55745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:43:37.071355   55745 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:43:37.155275   55745 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:43:37.155474   55745 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:43:37.305952   55745 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:43:37.306139   55745 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:43:37.306282   55745 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:43:37.474827   55745 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:43:37.476598   55745 out.go:204]   - Generating certificates and keys ...
	I0722 11:43:37.476768   55745 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:43:37.476928   55745 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:43:37.477140   55745 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:43:37.477296   55745 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:43:37.477481   55745 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:43:37.477633   55745 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:43:37.477796   55745 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:43:37.477961   55745 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:43:37.478195   55745 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:43:37.478390   55745 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:43:37.478786   55745 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:43:37.478887   55745 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:43:37.529823   55745 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:43:37.600191   55745 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:43:37.876685   55745 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:43:37.990967   55745 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:43:38.010792   55745 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:43:38.012471   55745 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:43:38.012576   55745 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:43:38.141445   55745 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:43:38.143270   55745 out.go:204]   - Booting up control plane ...
	I0722 11:43:38.143389   55745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:43:38.143487   55745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:43:38.144463   55745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:43:38.145792   55745 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:43:38.149351   55745 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:44:18.153777   55745 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:44:18.153883   55745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:44:18.154173   55745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:44:23.154210   55745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:44:23.154491   55745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:44:33.155535   55745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:44:33.155753   55745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:44:53.154768   55745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:44:53.155065   55745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:45:33.153963   55745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:45:33.154222   55745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:45:33.154239   55745 kubeadm.go:310] 
	I0722 11:45:33.154307   55745 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:45:33.154352   55745 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:45:33.154360   55745 kubeadm.go:310] 
	I0722 11:45:33.154420   55745 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:45:33.154476   55745 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:45:33.154612   55745 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:45:33.154622   55745 kubeadm.go:310] 
	I0722 11:45:33.154794   55745 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:45:33.154868   55745 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:45:33.154930   55745 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:45:33.154940   55745 kubeadm.go:310] 
	I0722 11:45:33.155055   55745 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:45:33.155167   55745 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:45:33.155176   55745 kubeadm.go:310] 
	I0722 11:45:33.155301   55745 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:45:33.155412   55745 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:45:33.155531   55745 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:45:33.155625   55745 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:45:33.155636   55745 kubeadm.go:310] 
	I0722 11:45:33.156634   55745 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:45:33.156782   55745 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:45:33.156894   55745 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 11:45:33.156974   55745 kubeadm.go:394] duration metric: took 3m55.09201942s to StartCluster
	I0722 11:45:33.157048   55745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:45:33.157133   55745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:45:33.213447   55745 cri.go:89] found id: ""
	I0722 11:45:33.213482   55745 logs.go:276] 0 containers: []
	W0722 11:45:33.213493   55745 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:45:33.213502   55745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:45:33.213564   55745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:45:33.254628   55745 cri.go:89] found id: ""
	I0722 11:45:33.254659   55745 logs.go:276] 0 containers: []
	W0722 11:45:33.254669   55745 logs.go:278] No container was found matching "etcd"
	I0722 11:45:33.254677   55745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:45:33.254743   55745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:45:33.305173   55745 cri.go:89] found id: ""
	I0722 11:45:33.305196   55745 logs.go:276] 0 containers: []
	W0722 11:45:33.305203   55745 logs.go:278] No container was found matching "coredns"
	I0722 11:45:33.305210   55745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:45:33.305266   55745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:45:33.346167   55745 cri.go:89] found id: ""
	I0722 11:45:33.346197   55745 logs.go:276] 0 containers: []
	W0722 11:45:33.346212   55745 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:45:33.346220   55745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:45:33.346284   55745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:45:33.387495   55745 cri.go:89] found id: ""
	I0722 11:45:33.387522   55745 logs.go:276] 0 containers: []
	W0722 11:45:33.387529   55745 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:45:33.387536   55745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:45:33.387587   55745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:45:33.426118   55745 cri.go:89] found id: ""
	I0722 11:45:33.426149   55745 logs.go:276] 0 containers: []
	W0722 11:45:33.426159   55745 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:45:33.426167   55745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:45:33.426245   55745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:45:33.462394   55745 cri.go:89] found id: ""
	I0722 11:45:33.462425   55745 logs.go:276] 0 containers: []
	W0722 11:45:33.462437   55745 logs.go:278] No container was found matching "kindnet"
	I0722 11:45:33.462447   55745 logs.go:123] Gathering logs for kubelet ...
	I0722 11:45:33.462462   55745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:45:33.514012   55745 logs.go:123] Gathering logs for dmesg ...
	I0722 11:45:33.514035   55745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:45:33.530645   55745 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:45:33.530673   55745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:45:33.650829   55745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:45:33.650853   55745 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:45:33.650867   55745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:45:33.753292   55745 logs.go:123] Gathering logs for container status ...
	I0722 11:45:33.753327   55745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
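	When startup fails, the diagnostics gathered above are the kubelet journal, dmesg, kubectl describe nodes, the CRI-O journal, and container status. A compact way to collect the same bundle by hand on the node, reusing the commands from the log (the last line is lightly simplified; the output file names are illustrative):

	    sudo journalctl -u kubelet -n 400 > kubelet.log
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig > nodes.log 2>&1 || true
	    sudo journalctl -u crio -n 400 > crio.log
	    sudo bash -c 'crictl ps -a || docker ps -a' > containers.log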
	W0722 11:45:33.799846   55745 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 11:45:33.799891   55745 out.go:239] * 
	W0722 11:45:33.799959   55745 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:45:33.799987   55745 out.go:239] * 
	* 
	W0722 11:45:33.800902   55745 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 11:45:33.803908   55745 out.go:177] 
	W0722 11:45:33.805063   55745 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:45:33.805117   55745 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 11:45:33.805142   55745 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 11:45:33.806528   55745 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-101261 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-101261 -n old-k8s-version-101261
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-101261 -n old-k8s-version-101261: exit status 6 (252.796575ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:45:34.092042   58607 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-101261" does not appear in /home/jenkins/minikube-integration/19313-5960/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-101261" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (265.49s)
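Note on the kubelet-check failures above: every probe of the kubelet health endpoint (quoted in the log as `curl -sSL http://localhost:10248/healthz`) was refused, which is what drives the wait-control-plane timeout and the K8S_KUBELET_NOT_RUNNING exit. The snippet below is a minimal, illustrative Go sketch of that same probe and is not part of the test suite; the only facts taken from the report are the endpoint and port, everything else (names, timeout) is an assumption for demonstration.

// healthzprobe is an illustrative sketch of the kubelet health probe that
// kubeadm's kubelet-check performs, equivalent to
// `curl -sSL http://localhost:10248/healthz` as quoted in the log above.
// It is not part of the minikube test suite.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second} // timeout chosen for the demo

	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// A "connection refused" here matches the failures in the log:
		// the kubelet is not listening on its healthz port.
		fmt.Fprintf(os.Stderr, "kubelet healthz probe failed: %v\n", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s (%s)\n", resp.Status, string(body))
}

Run on the node itself, a successful probe prints an HTTP 200 with "ok"; the repeated dial errors in the report mean the kubelet process never came up, consistent with the suggestion to inspect 'journalctl -xeu kubelet'.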

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-339929 --alsologtostderr -v=3
E0722 11:43:29.088032   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-339929 --alsologtostderr -v=3: exit status 82 (2m0.852434487s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-339929"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 11:43:28.771199   57183 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:43:28.771460   57183 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:43:28.771469   57183 out.go:304] Setting ErrFile to fd 2...
	I0722 11:43:28.771475   57183 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:43:28.771701   57183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:43:28.771944   57183 out.go:298] Setting JSON to false
	I0722 11:43:28.772039   57183 mustload.go:65] Loading cluster: no-preload-339929
	I0722 11:43:28.772486   57183 config.go:182] Loaded profile config "no-preload-339929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 11:43:28.772580   57183 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/config.json ...
	I0722 11:43:28.772773   57183 mustload.go:65] Loading cluster: no-preload-339929
	I0722 11:43:28.772919   57183 config.go:182] Loaded profile config "no-preload-339929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 11:43:28.772962   57183 stop.go:39] StopHost: no-preload-339929
	I0722 11:43:28.773331   57183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:43:28.773380   57183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:43:28.788211   57183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38205
	I0722 11:43:28.788607   57183 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:43:28.789096   57183 main.go:141] libmachine: Using API Version  1
	I0722 11:43:28.789116   57183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:43:28.789403   57183 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:43:28.791509   57183 out.go:177] * Stopping node "no-preload-339929"  ...
	I0722 11:43:28.792889   57183 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0722 11:43:28.792917   57183 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:43:28.793148   57183 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0722 11:43:28.793174   57183 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:43:28.796024   57183 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:43:28.796445   57183 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:41:45 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:43:28.796474   57183 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:43:28.796613   57183 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:43:28.796766   57183 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:43:28.796940   57183 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:43:28.797050   57183 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:43:28.878683   57183 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0722 11:43:28.931863   57183 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0722 11:43:28.996203   57183 main.go:141] libmachine: Stopping "no-preload-339929"...
	I0722 11:43:28.996230   57183 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:43:28.997917   57183 main.go:141] libmachine: (no-preload-339929) Calling .Stop
	I0722 11:43:29.001682   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 0/120
	I0722 11:43:30.003259   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 1/120
	I0722 11:43:31.004494   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 2/120
	I0722 11:43:32.005738   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 3/120
	I0722 11:43:33.006994   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 4/120
	I0722 11:43:34.008716   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 5/120
	I0722 11:43:35.010931   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 6/120
	I0722 11:43:36.012446   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 7/120
	I0722 11:43:37.013801   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 8/120
	I0722 11:43:38.015102   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 9/120
	I0722 11:43:39.017244   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 10/120
	I0722 11:43:40.018909   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 11/120
	I0722 11:43:41.020279   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 12/120
	I0722 11:43:42.021731   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 13/120
	I0722 11:43:43.192711   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 14/120
	I0722 11:43:44.194661   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 15/120
	I0722 11:43:45.196038   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 16/120
	I0722 11:43:46.197370   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 17/120
	I0722 11:43:47.198676   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 18/120
	I0722 11:43:48.200087   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 19/120
	I0722 11:43:49.202437   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 20/120
	I0722 11:43:50.203797   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 21/120
	I0722 11:43:51.205215   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 22/120
	I0722 11:43:52.207345   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 23/120
	I0722 11:43:53.208843   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 24/120
	I0722 11:43:54.210760   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 25/120
	I0722 11:43:55.212010   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 26/120
	I0722 11:43:56.213458   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 27/120
	I0722 11:43:57.214631   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 28/120
	I0722 11:43:58.215908   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 29/120
	I0722 11:43:59.217944   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 30/120
	I0722 11:44:00.219224   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 31/120
	I0722 11:44:01.220517   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 32/120
	I0722 11:44:02.222896   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 33/120
	I0722 11:44:03.224355   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 34/120
	I0722 11:44:04.226364   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 35/120
	I0722 11:44:05.227675   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 36/120
	I0722 11:44:06.229116   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 37/120
	I0722 11:44:07.230458   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 38/120
	I0722 11:44:08.231814   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 39/120
	I0722 11:44:09.233847   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 40/120
	I0722 11:44:10.235749   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 41/120
	I0722 11:44:11.237161   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 42/120
	I0722 11:44:12.238900   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 43/120
	I0722 11:44:13.240166   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 44/120
	I0722 11:44:14.242606   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 45/120
	I0722 11:44:15.244574   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 46/120
	I0722 11:44:16.246970   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 47/120
	I0722 11:44:17.248283   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 48/120
	I0722 11:44:18.249842   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 49/120
	I0722 11:44:19.252270   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 50/120
	I0722 11:44:20.253773   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 51/120
	I0722 11:44:21.255170   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 52/120
	I0722 11:44:22.256455   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 53/120
	I0722 11:44:23.257753   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 54/120
	I0722 11:44:24.259102   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 55/120
	I0722 11:44:25.260555   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 56/120
	I0722 11:44:26.262990   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 57/120
	I0722 11:44:27.264615   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 58/120
	I0722 11:44:28.266633   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 59/120
	I0722 11:44:29.268778   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 60/120
	I0722 11:44:30.270088   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 61/120
	I0722 11:44:31.271386   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 62/120
	I0722 11:44:32.273422   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 63/120
	I0722 11:44:33.274994   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 64/120
	I0722 11:44:34.276830   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 65/120
	I0722 11:44:35.278309   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 66/120
	I0722 11:44:36.279534   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 67/120
	I0722 11:44:37.281051   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 68/120
	I0722 11:44:38.282615   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 69/120
	I0722 11:44:39.284216   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 70/120
	I0722 11:44:40.286076   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 71/120
	I0722 11:44:41.287422   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 72/120
	I0722 11:44:42.288867   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 73/120
	I0722 11:44:43.290993   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 74/120
	I0722 11:44:44.292915   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 75/120
	I0722 11:44:45.294994   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 76/120
	I0722 11:44:46.296415   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 77/120
	I0722 11:44:47.297714   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 78/120
	I0722 11:44:48.299540   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 79/120
	I0722 11:44:49.301776   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 80/120
	I0722 11:44:50.303131   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 81/120
	I0722 11:44:51.304606   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 82/120
	I0722 11:44:52.306888   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 83/120
	I0722 11:44:53.308458   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 84/120
	I0722 11:44:54.310557   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 85/120
	I0722 11:44:55.312132   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 86/120
	I0722 11:44:56.440641   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 87/120
	I0722 11:44:57.442451   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 88/120
	I0722 11:44:58.443944   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 89/120
	I0722 11:44:59.446258   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 90/120
	I0722 11:45:00.447757   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 91/120
	I0722 11:45:01.449394   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 92/120
	I0722 11:45:02.450676   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 93/120
	I0722 11:45:03.452926   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 94/120
	I0722 11:45:04.454589   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 95/120
	I0722 11:45:05.456016   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 96/120
	I0722 11:45:06.458037   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 97/120
	I0722 11:45:07.459438   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 98/120
	I0722 11:45:08.461713   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 99/120
	I0722 11:45:09.463538   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 100/120
	I0722 11:45:10.464936   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 101/120
	I0722 11:45:11.466119   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 102/120
	I0722 11:45:12.467617   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 103/120
	I0722 11:45:13.469020   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 104/120
	I0722 11:45:14.470819   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 105/120
	I0722 11:45:15.472601   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 106/120
	I0722 11:45:16.474170   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 107/120
	I0722 11:45:17.475453   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 108/120
	I0722 11:45:18.476787   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 109/120
	I0722 11:45:19.479172   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 110/120
	I0722 11:45:20.480621   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 111/120
	I0722 11:45:21.481890   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 112/120
	I0722 11:45:22.483363   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 113/120
	I0722 11:45:23.484715   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 114/120
	I0722 11:45:24.486801   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 115/120
	I0722 11:45:25.489117   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 116/120
	I0722 11:45:26.491123   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 117/120
	I0722 11:45:27.492859   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 118/120
	I0722 11:45:28.495103   57183 main.go:141] libmachine: (no-preload-339929) Waiting for machine to stop 119/120
	I0722 11:45:29.495743   57183 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0722 11:45:29.495808   57183 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0722 11:45:29.546060   57183 out.go:177] 
	W0722 11:45:29.558514   57183 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0722 11:45:29.558543   57183 out.go:239] * 
	* 
	W0722 11:45:29.563221   57183 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 11:45:29.567373   57183 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-339929 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-339929 -n no-preload-339929
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-339929 -n no-preload-339929: exit status 3 (18.4365003s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:45:48.020681   58560 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.112:22: connect: no route to host
	E0722 11:45:48.020700   58560 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.112:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-339929" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.29s)
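The stderr above shows the stop path asking libmachine to stop the VM and then polling its state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") before exiting with GUEST_STOP_TIMEOUT while the state is still "Running". The sketch below reproduces only that polling pattern; the Stopper interface, fakeVM type, and waitForStop function are invented names for illustration and are not minikube's actual code.

// stopwait illustrates the polling pattern visible in the log above:
// request a stop, then check the machine state once per second for a
// fixed number of attempts before reporting a timeout.
package main

import (
	"fmt"
	"time"
)

// Stopper is a hypothetical stand-in for the driver calls (.Stop / .GetState)
// seen in the log; it is not minikube's API.
type Stopper interface {
	Stop() error            // ask the hypervisor to stop the VM
	State() (string, error) // current VM state, e.g. "Running" or "Stopped"
}

// waitForStop mirrors the "Waiting for machine to stop i/N" loop:
// poll once per second and give up after the last attempt.
func waitForStop(m Stopper, attempts int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		state, err := m.State()
		if err != nil {
			return err
		}
		if state != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	state, _ := m.State()
	return fmt.Errorf("unable to stop vm, current state %q", state)
}

// fakeVM never leaves the "Running" state, reproducing the timeout in the log.
type fakeVM struct{}

func (fakeVM) Stop() error            { return nil }
func (fakeVM) State() (string, error) { return "Running", nil }

func main() {
	// 5 attempts instead of 120 keeps the demonstration short.
	fmt.Println("result:", waitForStop(fakeVM{}, 5))
}

With a guest that never acknowledges the stop request, the loop exhausts its attempts and returns the same "unable to stop vm, current state \"Running\"" error string that appears in the report.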

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-802149 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-802149 --alsologtostderr -v=3: exit status 82 (2m0.600284359s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-802149"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 11:44:53.084052   58039 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:44:53.084351   58039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:44:53.084369   58039 out.go:304] Setting ErrFile to fd 2...
	I0722 11:44:53.084376   58039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:44:53.084678   58039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:44:53.085037   58039 out.go:298] Setting JSON to false
	I0722 11:44:53.085160   58039 mustload.go:65] Loading cluster: embed-certs-802149
	I0722 11:44:53.085632   58039 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:44:53.085741   58039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/config.json ...
	I0722 11:44:53.086022   58039 mustload.go:65] Loading cluster: embed-certs-802149
	I0722 11:44:53.086185   58039 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:44:53.086238   58039 stop.go:39] StopHost: embed-certs-802149
	I0722 11:44:53.086845   58039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:44:53.086910   58039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:44:53.101558   58039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44171
	I0722 11:44:53.102098   58039 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:44:53.102843   58039 main.go:141] libmachine: Using API Version  1
	I0722 11:44:53.102866   58039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:44:53.103257   58039 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:44:53.105508   58039 out.go:177] * Stopping node "embed-certs-802149"  ...
	I0722 11:44:53.107031   58039 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0722 11:44:53.107060   58039 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:44:53.107306   58039 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0722 11:44:53.107333   58039 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:44:53.110198   58039 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:53.110581   58039 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:43:57 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:44:53.110626   58039 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:44:53.110773   58039 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:44:53.110951   58039 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:44:53.111115   58039 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:44:53.111256   58039 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:44:53.205846   58039 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0722 11:44:53.272006   58039 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0722 11:44:53.331465   58039 main.go:141] libmachine: Stopping "embed-certs-802149"...
	I0722 11:44:53.331520   58039 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:44:53.333133   58039 main.go:141] libmachine: (embed-certs-802149) Calling .Stop
	I0722 11:44:53.337186   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 0/120
	I0722 11:44:54.338406   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 1/120
	I0722 11:44:55.339846   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 2/120
	I0722 11:44:56.440766   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 3/120
	I0722 11:44:57.442966   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 4/120
	I0722 11:44:58.444454   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 5/120
	I0722 11:44:59.446032   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 6/120
	I0722 11:45:00.447430   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 7/120
	I0722 11:45:01.448792   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 8/120
	I0722 11:45:02.450439   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 9/120
	I0722 11:45:03.452657   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 10/120
	I0722 11:45:04.454265   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 11/120
	I0722 11:45:05.455849   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 12/120
	I0722 11:45:06.457703   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 13/120
	I0722 11:45:07.459263   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 14/120
	I0722 11:45:08.461063   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 15/120
	I0722 11:45:09.462844   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 16/120
	I0722 11:45:10.464612   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 17/120
	I0722 11:45:11.465835   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 18/120
	I0722 11:45:12.467475   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 19/120
	I0722 11:45:13.469516   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 20/120
	I0722 11:45:14.471013   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 21/120
	I0722 11:45:15.472313   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 22/120
	I0722 11:45:16.473767   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 23/120
	I0722 11:45:17.475169   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 24/120
	I0722 11:45:18.477043   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 25/120
	I0722 11:45:19.479118   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 26/120
	I0722 11:45:20.480626   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 27/120
	I0722 11:45:21.481964   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 28/120
	I0722 11:45:22.483821   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 29/120
	I0722 11:45:23.485677   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 30/120
	I0722 11:45:24.487692   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 31/120
	I0722 11:45:25.489267   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 32/120
	I0722 11:45:26.491570   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 33/120
	I0722 11:45:27.493576   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 34/120
	I0722 11:45:28.495237   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 35/120
	I0722 11:45:29.497412   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 36/120
	I0722 11:45:30.498748   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 37/120
	I0722 11:45:31.500200   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 38/120
	I0722 11:45:32.501846   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 39/120
	I0722 11:45:33.503940   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 40/120
	I0722 11:45:34.505175   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 41/120
	I0722 11:45:35.506854   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 42/120
	I0722 11:45:36.508155   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 43/120
	I0722 11:45:37.509351   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 44/120
	I0722 11:45:38.511062   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 45/120
	I0722 11:45:39.512498   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 46/120
	I0722 11:45:40.513722   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 47/120
	I0722 11:45:41.515037   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 48/120
	I0722 11:45:42.517197   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 49/120
	I0722 11:45:43.519355   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 50/120
	I0722 11:45:44.520627   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 51/120
	I0722 11:45:45.522146   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 52/120
	I0722 11:45:46.523547   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 53/120
	I0722 11:45:47.524996   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 54/120
	I0722 11:45:48.526595   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 55/120
	I0722 11:45:49.528250   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 56/120
	I0722 11:45:50.529857   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 57/120
	I0722 11:45:51.531895   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 58/120
	I0722 11:45:52.533447   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 59/120
	I0722 11:45:53.536001   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 60/120
	I0722 11:45:54.537433   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 61/120
	I0722 11:45:55.538748   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 62/120
	I0722 11:45:56.540143   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 63/120
	I0722 11:45:57.541396   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 64/120
	I0722 11:45:58.543414   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 65/120
	I0722 11:45:59.544862   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 66/120
	I0722 11:46:00.547129   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 67/120
	I0722 11:46:01.548361   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 68/120
	I0722 11:46:02.549852   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 69/120
	I0722 11:46:03.551949   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 70/120
	I0722 11:46:04.553345   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 71/120
	I0722 11:46:05.555167   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 72/120
	I0722 11:46:06.556507   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 73/120
	I0722 11:46:07.558059   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 74/120
	I0722 11:46:08.559843   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 75/120
	I0722 11:46:09.561385   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 76/120
	I0722 11:46:10.563335   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 77/120
	I0722 11:46:11.564752   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 78/120
	I0722 11:46:12.566855   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 79/120
	I0722 11:46:13.569014   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 80/120
	I0722 11:46:14.570265   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 81/120
	I0722 11:46:15.571737   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 82/120
	I0722 11:46:16.573222   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 83/120
	I0722 11:46:17.574615   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 84/120
	I0722 11:46:18.576664   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 85/120
	I0722 11:46:19.578257   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 86/120
	I0722 11:46:20.580367   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 87/120
	I0722 11:46:21.581783   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 88/120
	I0722 11:46:22.583168   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 89/120
	I0722 11:46:23.585355   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 90/120
	I0722 11:46:24.587813   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 91/120
	I0722 11:46:25.588886   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 92/120
	I0722 11:46:26.590454   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 93/120
	I0722 11:46:27.591619   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 94/120
	I0722 11:46:28.593325   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 95/120
	I0722 11:46:29.594850   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 96/120
	I0722 11:46:30.596010   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 97/120
	I0722 11:46:31.597349   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 98/120
	I0722 11:46:32.598679   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 99/120
	I0722 11:46:33.600511   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 100/120
	I0722 11:46:34.601922   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 101/120
	I0722 11:46:35.603668   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 102/120
	I0722 11:46:36.605030   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 103/120
	I0722 11:46:37.606386   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 104/120
	I0722 11:46:38.608445   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 105/120
	I0722 11:46:39.609803   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 106/120
	I0722 11:46:40.611123   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 107/120
	I0722 11:46:41.612427   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 108/120
	I0722 11:46:42.613493   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 109/120
	I0722 11:46:43.615402   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 110/120
	I0722 11:46:44.617390   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 111/120
	I0722 11:46:45.618655   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 112/120
	I0722 11:46:46.619926   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 113/120
	I0722 11:46:47.621143   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 114/120
	I0722 11:46:48.622949   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 115/120
	I0722 11:46:49.624240   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 116/120
	I0722 11:46:50.625457   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 117/120
	I0722 11:46:51.626773   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 118/120
	I0722 11:46:52.628098   58039 main.go:141] libmachine: (embed-certs-802149) Waiting for machine to stop 119/120
	I0722 11:46:53.628689   58039 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0722 11:46:53.628760   58039 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0722 11:46:53.630454   58039 out.go:177] 
	W0722 11:46:53.631602   58039 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0722 11:46:53.631619   58039 out.go:239] * 
	* 
	W0722 11:46:53.634925   58039 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 11:46:53.636154   58039 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-802149 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-802149 -n embed-certs-802149
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-802149 -n embed-certs-802149: exit status 3 (18.60690328s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:47:12.244659   59250 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.113:22: connect: no route to host
	E0722 11:47:12.244677   59250 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.113:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-802149" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-101261 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-101261 create -f testdata/busybox.yaml: exit status 1 (50.464574ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-101261" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-101261 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-101261 -n old-k8s-version-101261
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-101261 -n old-k8s-version-101261: exit status 6 (224.111278ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:45:34.368424   58644 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-101261" does not appear in /home/jenkins/minikube-integration/19313-5960/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-101261" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-101261 -n old-k8s-version-101261
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-101261 -n old-k8s-version-101261: exit status 6 (225.683831ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:45:34.592874   58673 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-101261" does not appear in /home/jenkins/minikube-integration/19313-5960/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-101261" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-101261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-101261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m56.021378775s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-101261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-101261 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-101261 describe deploy/metrics-server -n kube-system: exit status 1 (41.917282ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-101261" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-101261 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-101261 -n old-k8s-version-101261
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-101261 -n old-k8s-version-101261: exit status 6 (215.078198ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:47:30.872007   59539 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-101261" does not appear in /home/jenkins/minikube-integration/19313-5960/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-101261" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-339929 -n no-preload-339929
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-339929 -n no-preload-339929: exit status 3 (3.171631286s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:45:51.192845   58781 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.112:22: connect: no route to host
	E0722 11:45:51.192902   58781 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.112:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-339929 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-339929 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.149120107s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.112:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-339929 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-339929 -n no-preload-339929
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-339929 -n no-preload-339929: exit status 3 (3.062259288s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:46:00.404654   58891 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.112:22: connect: no route to host
	E0722 11:46:00.404674   58891 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.112:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-339929" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-605740 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-605740 --alsologtostderr -v=3: exit status 82 (2m0.493668398s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-605740"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 11:46:44.358229   59199 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:46:44.358332   59199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:46:44.358341   59199 out.go:304] Setting ErrFile to fd 2...
	I0722 11:46:44.358345   59199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:46:44.358524   59199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:46:44.358766   59199 out.go:298] Setting JSON to false
	I0722 11:46:44.358864   59199 mustload.go:65] Loading cluster: default-k8s-diff-port-605740
	I0722 11:46:44.359222   59199 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:46:44.359295   59199 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/config.json ...
	I0722 11:46:44.359472   59199 mustload.go:65] Loading cluster: default-k8s-diff-port-605740
	I0722 11:46:44.359590   59199 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:46:44.359621   59199 stop.go:39] StopHost: default-k8s-diff-port-605740
	I0722 11:46:44.359989   59199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:46:44.360047   59199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:46:44.374275   59199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37775
	I0722 11:46:44.374668   59199 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:46:44.375274   59199 main.go:141] libmachine: Using API Version  1
	I0722 11:46:44.375294   59199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:46:44.375604   59199 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:46:44.377614   59199 out.go:177] * Stopping node "default-k8s-diff-port-605740"  ...
	I0722 11:46:44.379017   59199 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0722 11:46:44.379057   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:46:44.379261   59199 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0722 11:46:44.379280   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:46:44.382190   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:46:44.382564   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:45:11 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:46:44.382592   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:46:44.382735   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:46:44.382928   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:46:44.383070   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:46:44.383184   59199 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:46:44.489094   59199 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0722 11:46:44.555554   59199 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0722 11:46:44.616155   59199 main.go:141] libmachine: Stopping "default-k8s-diff-port-605740"...
	I0722 11:46:44.616186   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:46:44.617668   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Stop
	I0722 11:46:44.620923   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 0/120
	I0722 11:46:45.622484   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 1/120
	I0722 11:46:46.623703   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 2/120
	I0722 11:46:47.624730   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 3/120
	I0722 11:46:48.626578   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 4/120
	I0722 11:46:49.628315   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 5/120
	I0722 11:46:50.629220   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 6/120
	I0722 11:46:51.630601   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 7/120
	I0722 11:46:52.631442   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 8/120
	I0722 11:46:53.632680   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 9/120
	I0722 11:46:54.635064   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 10/120
	I0722 11:46:55.636293   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 11/120
	I0722 11:46:56.637761   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 12/120
	I0722 11:46:57.638979   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 13/120
	I0722 11:46:58.640557   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 14/120
	I0722 11:46:59.642460   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 15/120
	I0722 11:47:00.643651   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 16/120
	I0722 11:47:01.645145   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 17/120
	I0722 11:47:02.646763   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 18/120
	I0722 11:47:03.648190   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 19/120
	I0722 11:47:04.650411   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 20/120
	I0722 11:47:05.651782   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 21/120
	I0722 11:47:06.653174   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 22/120
	I0722 11:47:07.654948   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 23/120
	I0722 11:47:08.656178   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 24/120
	I0722 11:47:09.658021   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 25/120
	I0722 11:47:10.659393   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 26/120
	I0722 11:47:11.660626   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 27/120
	I0722 11:47:12.661968   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 28/120
	I0722 11:47:13.663425   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 29/120
	I0722 11:47:14.665403   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 30/120
	I0722 11:47:15.666894   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 31/120
	I0722 11:47:16.668032   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 32/120
	I0722 11:47:17.670147   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 33/120
	I0722 11:47:18.671856   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 34/120
	I0722 11:47:19.673608   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 35/120
	I0722 11:47:20.675027   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 36/120
	I0722 11:47:21.676092   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 37/120
	I0722 11:47:22.677322   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 38/120
	I0722 11:47:23.678712   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 39/120
	I0722 11:47:24.680951   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 40/120
	I0722 11:47:25.682511   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 41/120
	I0722 11:47:26.683865   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 42/120
	I0722 11:47:27.685208   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 43/120
	I0722 11:47:28.686520   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 44/120
	I0722 11:47:29.688455   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 45/120
	I0722 11:47:30.689611   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 46/120
	I0722 11:47:31.690969   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 47/120
	I0722 11:47:32.692525   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 48/120
	I0722 11:47:33.694237   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 49/120
	I0722 11:47:34.696808   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 50/120
	I0722 11:47:35.699234   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 51/120
	I0722 11:47:36.700720   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 52/120
	I0722 11:47:37.702183   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 53/120
	I0722 11:47:38.703437   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 54/120
	I0722 11:47:39.705642   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 55/120
	I0722 11:47:40.706978   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 56/120
	I0722 11:47:41.708303   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 57/120
	I0722 11:47:42.709691   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 58/120
	I0722 11:47:43.711239   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 59/120
	I0722 11:47:44.713205   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 60/120
	I0722 11:47:45.714623   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 61/120
	I0722 11:47:46.715995   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 62/120
	I0722 11:47:47.717529   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 63/120
	I0722 11:47:48.718830   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 64/120
	I0722 11:47:49.720554   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 65/120
	I0722 11:47:50.721996   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 66/120
	I0722 11:47:51.723219   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 67/120
	I0722 11:47:52.724783   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 68/120
	I0722 11:47:53.726214   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 69/120
	I0722 11:47:54.728418   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 70/120
	I0722 11:47:55.729922   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 71/120
	I0722 11:47:56.731354   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 72/120
	I0722 11:47:57.732981   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 73/120
	I0722 11:47:58.734378   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 74/120
	I0722 11:47:59.736414   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 75/120
	I0722 11:48:00.737746   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 76/120
	I0722 11:48:01.739269   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 77/120
	I0722 11:48:02.740657   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 78/120
	I0722 11:48:03.742059   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 79/120
	I0722 11:48:04.744172   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 80/120
	I0722 11:48:05.745647   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 81/120
	I0722 11:48:06.746989   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 82/120
	I0722 11:48:07.748296   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 83/120
	I0722 11:48:08.749685   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 84/120
	I0722 11:48:09.751404   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 85/120
	I0722 11:48:10.752810   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 86/120
	I0722 11:48:11.754181   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 87/120
	I0722 11:48:12.755679   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 88/120
	I0722 11:48:13.757174   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 89/120
	I0722 11:48:14.759284   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 90/120
	I0722 11:48:15.760670   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 91/120
	I0722 11:48:16.762791   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 92/120
	I0722 11:48:17.764080   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 93/120
	I0722 11:48:18.765376   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 94/120
	I0722 11:48:19.767409   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 95/120
	I0722 11:48:20.768773   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 96/120
	I0722 11:48:21.770334   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 97/120
	I0722 11:48:22.771742   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 98/120
	I0722 11:48:23.773075   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 99/120
	I0722 11:48:24.775260   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 100/120
	I0722 11:48:25.776543   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 101/120
	I0722 11:48:26.777760   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 102/120
	I0722 11:48:27.779025   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 103/120
	I0722 11:48:28.780296   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 104/120
	I0722 11:48:29.782280   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 105/120
	I0722 11:48:30.783754   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 106/120
	I0722 11:48:31.785129   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 107/120
	I0722 11:48:32.786473   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 108/120
	I0722 11:48:33.787787   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 109/120
	I0722 11:48:34.789925   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 110/120
	I0722 11:48:35.791359   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 111/120
	I0722 11:48:36.792697   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 112/120
	I0722 11:48:37.795002   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 113/120
	I0722 11:48:38.796331   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 114/120
	I0722 11:48:39.798155   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 115/120
	I0722 11:48:40.799651   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 116/120
	I0722 11:48:41.801077   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 117/120
	I0722 11:48:42.802368   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 118/120
	I0722 11:48:43.803724   59199 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for machine to stop 119/120
	I0722 11:48:44.804765   59199 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0722 11:48:44.804827   59199 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0722 11:48:44.806453   59199 out.go:177] 
	W0722 11:48:44.807530   59199 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0722 11:48:44.807547   59199 out.go:239] * 
	* 
	W0722 11:48:44.811024   59199 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 11:48:44.812310   59199 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-605740 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-605740 -n default-k8s-diff-port-605740
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-605740 -n default-k8s-diff-port-605740: exit status 3 (18.534957424s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:49:03.348703   59966 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	E0722 11:49:03.348722   59966 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-605740" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-802149 -n embed-certs-802149
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-802149 -n embed-certs-802149: exit status 3 (3.167957948s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:47:15.412696   59344 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.113:22: connect: no route to host
	E0722 11:47:15.412721   59344 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.113:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-802149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-802149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152312874s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.113:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-802149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-802149 -n embed-certs-802149
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-802149 -n embed-certs-802149: exit status 3 (3.063542473s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:47:24.628672   59447 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.113:22: connect: no route to host
	E0722 11:47:24.628691   59447 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.113:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-802149" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (710.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-101261 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0722 11:47:59.660208   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 11:48:29.087854   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-101261 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m46.994099472s)

                                                
                                                
-- stdout --
	* [old-k8s-version-101261] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19313
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-101261" primary control-plane node in "old-k8s-version-101261" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-101261" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 11:47:35.409814   59674 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:47:35.410067   59674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:47:35.410076   59674 out.go:304] Setting ErrFile to fd 2...
	I0722 11:47:35.410080   59674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:47:35.410250   59674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:47:35.410736   59674 out.go:298] Setting JSON to false
	I0722 11:47:35.411579   59674 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5407,"bootTime":1721643448,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 11:47:35.411631   59674 start.go:139] virtualization: kvm guest
	I0722 11:47:35.414020   59674 out.go:177] * [old-k8s-version-101261] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 11:47:35.415307   59674 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 11:47:35.415349   59674 notify.go:220] Checking for updates...
	I0722 11:47:35.417723   59674 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 11:47:35.419006   59674 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:47:35.420295   59674 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:47:35.421513   59674 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 11:47:35.422615   59674 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 11:47:35.424043   59674 config.go:182] Loaded profile config "old-k8s-version-101261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 11:47:35.424475   59674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:47:35.424546   59674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:47:35.440163   59674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36183
	I0722 11:47:35.440493   59674 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:47:35.440971   59674 main.go:141] libmachine: Using API Version  1
	I0722 11:47:35.440994   59674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:47:35.441306   59674 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:47:35.441460   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:47:35.442934   59674 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0722 11:47:35.444069   59674 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 11:47:35.444341   59674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:47:35.444375   59674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:47:35.459702   59674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39519
	I0722 11:47:35.460081   59674 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:47:35.460505   59674 main.go:141] libmachine: Using API Version  1
	I0722 11:47:35.460534   59674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:47:35.460824   59674 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:47:35.461013   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:47:35.494972   59674 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 11:47:35.496135   59674 start.go:297] selected driver: kvm2
	I0722 11:47:35.496151   59674 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:47:35.496271   59674 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 11:47:35.496943   59674 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:47:35.497010   59674 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 11:47:35.511547   59674 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 11:47:35.511888   59674 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:47:35.511923   59674 cni.go:84] Creating CNI manager for ""
	I0722 11:47:35.511931   59674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:47:35.511972   59674 start.go:340] cluster config:
	{Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:47:35.512071   59674 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:47:35.513524   59674 out.go:177] * Starting "old-k8s-version-101261" primary control-plane node in "old-k8s-version-101261" cluster
	I0722 11:47:35.514692   59674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 11:47:35.514715   59674 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0722 11:47:35.514729   59674 cache.go:56] Caching tarball of preloaded images
	I0722 11:47:35.514787   59674 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 11:47:35.514797   59674 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0722 11:47:35.514873   59674 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/config.json ...
	I0722 11:47:35.515033   59674 start.go:360] acquireMachinesLock for old-k8s-version-101261: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:50:57.057071   59674 start.go:364] duration metric: took 3m21.54200658s to acquireMachinesLock for "old-k8s-version-101261"
	I0722 11:50:57.057128   59674 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:50:57.057138   59674 fix.go:54] fixHost starting: 
	I0722 11:50:57.057543   59674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:50:57.057575   59674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:50:57.073788   59674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36245
	I0722 11:50:57.074103   59674 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:50:57.074561   59674 main.go:141] libmachine: Using API Version  1
	I0722 11:50:57.074582   59674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:50:57.074903   59674 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:50:57.075091   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:50:57.075225   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetState
	I0722 11:50:57.076587   59674 fix.go:112] recreateIfNeeded on old-k8s-version-101261: state=Stopped err=<nil>
	I0722 11:50:57.076607   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	W0722 11:50:57.076745   59674 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:50:57.079659   59674 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-101261" ...
	I0722 11:50:57.080830   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .Start
	I0722 11:50:57.080987   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring networks are active...
	I0722 11:50:57.081647   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring network default is active
	I0722 11:50:57.081955   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring network mk-old-k8s-version-101261 is active
	I0722 11:50:57.082277   59674 main.go:141] libmachine: (old-k8s-version-101261) Getting domain xml...
	I0722 11:50:57.083008   59674 main.go:141] libmachine: (old-k8s-version-101261) Creating domain...
	I0722 11:50:58.331212   59674 main.go:141] libmachine: (old-k8s-version-101261) Waiting to get IP...
	I0722 11:50:58.332090   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:58.332510   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:58.332594   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:58.332505   60690 retry.go:31] will retry after 310.971479ms: waiting for machine to come up
	I0722 11:50:58.645391   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:58.645871   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:58.645898   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:58.645841   60690 retry.go:31] will retry after 371.739884ms: waiting for machine to come up
	I0722 11:50:59.019622   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.020229   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.020258   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.020202   60690 retry.go:31] will retry after 459.770177ms: waiting for machine to come up
	I0722 11:50:59.482207   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.482871   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.482901   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.482830   60690 retry.go:31] will retry after 459.633846ms: waiting for machine to come up
	I0722 11:50:59.944748   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.945204   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.945234   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.945166   60690 retry.go:31] will retry after 661.206679ms: waiting for machine to come up
	I0722 11:51:00.608285   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:00.608737   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:00.608759   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:00.608685   60690 retry.go:31] will retry after 728.049334ms: waiting for machine to come up
	I0722 11:51:01.337864   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:01.338406   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:01.338437   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:01.338329   60690 retry.go:31] will retry after 1.060339766s: waiting for machine to come up
	I0722 11:51:02.400096   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:02.400633   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:02.400664   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:02.400580   60690 retry.go:31] will retry after 957.922107ms: waiting for machine to come up
	I0722 11:51:03.360231   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:03.360663   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:03.360692   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:03.360612   60690 retry.go:31] will retry after 1.717107267s: waiting for machine to come up
	I0722 11:51:05.080655   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:05.081172   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:05.081196   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:05.081111   60690 retry.go:31] will retry after 1.708281457s: waiting for machine to come up
	I0722 11:51:06.790946   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:06.791370   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:06.791398   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:06.791331   60690 retry.go:31] will retry after 2.398904394s: waiting for machine to come up
	I0722 11:51:09.193385   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:09.193778   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:09.193806   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:09.193704   60690 retry.go:31] will retry after 2.18416034s: waiting for machine to come up
	I0722 11:51:11.378924   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:11.379301   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:11.379324   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:11.379257   60690 retry.go:31] will retry after 3.119433482s: waiting for machine to come up
	I0722 11:51:14.501549   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.502004   59674 main.go:141] libmachine: (old-k8s-version-101261) Found IP for machine: 192.168.50.51
	I0722 11:51:14.502029   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has current primary IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.502040   59674 main.go:141] libmachine: (old-k8s-version-101261) Reserving static IP address...
	I0722 11:51:14.502410   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "old-k8s-version-101261", mac: "52:54:00:e5:34:9a", ip: "192.168.50.51"} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.502429   59674 main.go:141] libmachine: (old-k8s-version-101261) Reserved static IP address: 192.168.50.51
	I0722 11:51:14.502448   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | skip adding static IP to network mk-old-k8s-version-101261 - found existing host DHCP lease matching {name: "old-k8s-version-101261", mac: "52:54:00:e5:34:9a", ip: "192.168.50.51"}
	I0722 11:51:14.502464   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Getting to WaitForSSH function...
	I0722 11:51:14.502481   59674 main.go:141] libmachine: (old-k8s-version-101261) Waiting for SSH to be available...
	I0722 11:51:14.504709   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.504989   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.505018   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.505192   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using SSH client type: external
	I0722 11:51:14.505229   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa (-rw-------)
	I0722 11:51:14.505273   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:14.505287   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | About to run SSH command:
	I0722 11:51:14.505300   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | exit 0
	I0722 11:51:14.628343   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | SSH cmd err, output: <nil>: 
	I0722 11:51:14.628747   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetConfigRaw
	I0722 11:51:14.629343   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:14.631934   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.632294   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.632323   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.632541   59674 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/config.json ...
	I0722 11:51:14.632730   59674 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:14.632747   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:14.632934   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.635214   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.635567   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.635594   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.635663   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.635887   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.636070   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.636212   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.636492   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.636656   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.636665   59674 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:14.745179   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:14.745210   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:14.745456   59674 buildroot.go:166] provisioning hostname "old-k8s-version-101261"
	I0722 11:51:14.745482   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:14.745664   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.748709   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.749155   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.749187   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.749356   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.749528   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.749708   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.749851   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.750115   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.750325   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.750339   59674 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-101261 && echo "old-k8s-version-101261" | sudo tee /etc/hostname
	I0722 11:51:14.878323   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-101261
	
	I0722 11:51:14.878374   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.881403   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.881776   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.881799   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.882004   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.882191   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.882368   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.882523   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.882714   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.882886   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.882914   59674 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-101261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-101261/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-101261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:15.005182   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:15.005211   59674 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:15.005232   59674 buildroot.go:174] setting up certificates
	I0722 11:51:15.005244   59674 provision.go:84] configureAuth start
	I0722 11:51:15.005257   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:15.005510   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:15.008414   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.008818   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.008842   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.009021   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.011255   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.011549   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.011571   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.011712   59674 provision.go:143] copyHostCerts
	I0722 11:51:15.011784   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:15.011798   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:15.011862   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:15.011991   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:15.012003   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:15.012033   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:15.012117   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:15.012126   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:15.012156   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:15.012235   59674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-101261 san=[127.0.0.1 192.168.50.51 localhost minikube old-k8s-version-101261]
	I0722 11:51:15.514379   59674 provision.go:177] copyRemoteCerts
	I0722 11:51:15.514438   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:15.514471   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.517061   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.517350   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.517375   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.517523   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.517692   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.517856   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.517976   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:15.598446   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:15.622512   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 11:51:15.645865   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 11:51:15.669136   59674 provision.go:87] duration metric: took 663.880253ms to configureAuth
	I0722 11:51:15.669166   59674 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:15.669360   59674 config.go:182] Loaded profile config "old-k8s-version-101261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 11:51:15.669441   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.672245   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.672720   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.672769   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.672859   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.673066   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.673228   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.673348   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.673589   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:15.673764   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:15.673784   59674 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:15.935046   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:15.935071   59674 machine.go:97] duration metric: took 1.302328915s to provisionDockerMachine
	I0722 11:51:15.935082   59674 start.go:293] postStartSetup for "old-k8s-version-101261" (driver="kvm2")
	I0722 11:51:15.935094   59674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:15.935114   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:15.935445   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:15.935485   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.938454   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.938802   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.938828   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.939013   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.939212   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.939341   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.939477   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.023536   59674 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:16.028446   59674 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:16.028474   59674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:16.028542   59674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:16.028639   59674 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:16.028746   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:16.038705   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:16.065421   59674 start.go:296] duration metric: took 130.328201ms for postStartSetup
	I0722 11:51:16.065455   59674 fix.go:56] duration metric: took 19.008317885s for fixHost
	I0722 11:51:16.065480   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.068098   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.068330   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.068354   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.068486   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.068697   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.068883   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.069035   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.069215   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:16.069371   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:16.069380   59674 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 11:51:16.173115   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649076.142588532
	
	I0722 11:51:16.173135   59674 fix.go:216] guest clock: 1721649076.142588532
	I0722 11:51:16.173149   59674 fix.go:229] Guest: 2024-07-22 11:51:16.142588532 +0000 UTC Remote: 2024-07-22 11:51:16.065460257 +0000 UTC m=+220.687192060 (delta=77.128275ms)
	I0722 11:51:16.173189   59674 fix.go:200] guest clock delta is within tolerance: 77.128275ms
	I0722 11:51:16.173196   59674 start.go:83] releasing machines lock for "old-k8s-version-101261", held for 19.116093793s
	I0722 11:51:16.173224   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.173497   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:16.176102   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.176522   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.176564   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.176712   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177189   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177387   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177476   59674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:16.177519   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.177627   59674 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:16.177650   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.180365   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180402   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180751   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.180773   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180806   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.180819   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180908   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.181020   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.181091   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.181168   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.181254   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.181331   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.181346   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.181492   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.262013   59674 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:16.292921   59674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:16.437729   59674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:16.443840   59674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:16.443929   59674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:16.459686   59674 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:16.459703   59674 start.go:495] detecting cgroup driver to use...
	I0722 11:51:16.459761   59674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:16.474514   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:16.487808   59674 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:16.487862   59674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:16.500977   59674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:16.514210   59674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:16.629558   59674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:16.810274   59674 docker.go:233] disabling docker service ...
	I0722 11:51:16.810351   59674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:16.829708   59674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:16.848587   59674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:16.973745   59674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:17.114538   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:17.128727   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:17.147575   59674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 11:51:17.147628   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.157881   59674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:17.157939   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.168881   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.179407   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.189894   59674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:17.201433   59674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:17.210901   59674 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:17.210954   59674 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:17.224683   59674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:51:17.235711   59674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:17.366833   59674 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:17.508852   59674 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:17.508932   59674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:17.514001   59674 start.go:563] Will wait 60s for crictl version
	I0722 11:51:17.514051   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:17.517678   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:17.555193   59674 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:17.555272   59674 ssh_runner.go:195] Run: crio --version
	I0722 11:51:17.583250   59674 ssh_runner.go:195] Run: crio --version
	I0722 11:51:17.615045   59674 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 11:51:17.616423   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:17.619616   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:17.620012   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:17.620043   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:17.620213   59674 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:17.624632   59674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:17.639759   59674 kubeadm.go:883] updating cluster {Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:17.639882   59674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 11:51:17.639923   59674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:17.688299   59674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:51:17.688370   59674 ssh_runner.go:195] Run: which lz4
	I0722 11:51:17.692462   59674 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 11:51:17.696723   59674 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:51:17.696761   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 11:51:19.364933   59674 crio.go:462] duration metric: took 1.672511697s to copy over tarball
	I0722 11:51:19.365010   59674 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:22.347245   59674 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.982204367s)
	I0722 11:51:22.347275   59674 crio.go:469] duration metric: took 2.982313685s to extract the tarball
	I0722 11:51:22.347283   59674 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:22.390059   59674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:22.429356   59674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:51:22.429383   59674 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 11:51:22.429499   59674 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.429520   59674 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.429524   59674 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.429545   59674 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 11:51:22.429498   59674 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.429497   59674 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.429529   59674 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.429498   59674 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.431549   59674 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.431556   59674 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 11:51:22.431570   59674 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.431588   59674 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.431611   59674 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.431555   59674 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.431666   59674 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.431675   59674 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.603462   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.604733   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.608788   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.611177   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.616981   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.634838   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.674004   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 11:51:22.706162   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.730052   59674 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 11:51:22.730112   59674 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 11:51:22.730129   59674 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.730142   59674 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.730183   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.730196   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.760229   59674 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 11:51:22.760271   59674 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.760322   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.787207   59674 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 11:51:22.787244   59674 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 11:51:22.787254   59674 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.787273   59674 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.787303   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.787311   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.828611   59674 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 11:51:22.828656   59674 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.828703   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.841609   59674 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 11:51:22.841648   59674 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 11:51:22.841692   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.913517   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.913549   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.913557   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.913605   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.913519   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.913605   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.913625   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 11:51:23.063640   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 11:51:23.063652   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 11:51:23.063742   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 11:51:23.063766   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 11:51:23.070202   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0722 11:51:23.073265   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 11:51:23.073310   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 11:51:23.073358   59674 cache_images.go:92] duration metric: took 643.962788ms to LoadCachedImages
	W0722 11:51:23.073425   59674 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0722 11:51:23.073438   59674 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.20.0 crio true true} ...
	I0722 11:51:23.073584   59674 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-101261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:23.073666   59674 ssh_runner.go:195] Run: crio config
	I0722 11:51:23.125532   59674 cni.go:84] Creating CNI manager for ""
	I0722 11:51:23.125554   59674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:23.125566   59674 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:23.125590   59674 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-101261 NodeName:old-k8s-version-101261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 11:51:23.125753   59674 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-101261"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:23.125818   59674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 11:51:23.136207   59674 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:23.136277   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:23.146103   59674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0722 11:51:23.163756   59674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:23.183108   59674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0722 11:51:23.201223   59674 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:23.205369   59674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:23.218711   59674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:23.339415   59674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:23.358601   59674 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261 for IP: 192.168.50.51
	I0722 11:51:23.358622   59674 certs.go:194] generating shared ca certs ...
	I0722 11:51:23.358654   59674 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:23.358813   59674 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:23.358865   59674 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:23.358877   59674 certs.go:256] generating profile certs ...
	I0722 11:51:23.358990   59674 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/client.key
	I0722 11:51:23.359058   59674 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key.455618c3
	I0722 11:51:23.359110   59674 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key
	I0722 11:51:23.359248   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:23.359286   59674 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:23.359300   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:23.359332   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:23.359363   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:23.359393   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:23.359445   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:23.360290   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:23.407113   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:23.439799   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:23.484136   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:23.513902   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 11:51:23.551266   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:51:23.581930   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:23.612470   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:51:23.644003   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:23.671068   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:23.695514   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:23.722711   59674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:23.742312   59674 ssh_runner.go:195] Run: openssl version
	I0722 11:51:23.749680   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:23.763975   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.769799   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.769848   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.777286   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:51:23.788007   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:23.799005   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.803367   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.803405   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.809239   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:23.820095   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:23.832492   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.837230   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.837268   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.842861   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:23.853772   59674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:23.858178   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:23.864134   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:23.870035   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:23.875939   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:23.881552   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:23.887286   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 11:51:23.893029   59674 kubeadm.go:392] StartCluster: {Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:23.893133   59674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:23.893184   59674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:23.939121   59674 cri.go:89] found id: ""
	I0722 11:51:23.939187   59674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:23.951089   59674 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:23.951108   59674 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:23.951154   59674 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:23.962212   59674 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:23.963627   59674 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-101261" does not appear in /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:51:23.964627   59674 kubeconfig.go:62] /home/jenkins/minikube-integration/19313-5960/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-101261" cluster setting kubeconfig missing "old-k8s-version-101261" context setting]
	I0722 11:51:23.966075   59674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:24.070513   59674 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:24.081628   59674 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0722 11:51:24.081662   59674 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:24.081674   59674 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:24.081728   59674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:24.117673   59674 cri.go:89] found id: ""
	I0722 11:51:24.117750   59674 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:24.134081   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:24.144294   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:24.144315   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:24.144366   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:51:24.153640   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:24.153685   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:24.163252   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:51:24.173762   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:24.173815   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:24.183272   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:51:24.194090   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:24.194148   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:24.205213   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:51:24.215709   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:24.215787   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:24.226876   59674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:24.237966   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:24.378277   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:25.787025   59674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.408710522s)
	I0722 11:51:25.787059   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.031231   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.120122   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.216108   59674 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:26.216204   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:26.717257   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:27.216782   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:27.716476   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:28.216529   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:28.716302   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:29.216249   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:29.717071   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:30.216364   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:30.716961   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.216474   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.716685   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:32.216748   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:32.716886   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:33.216333   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:33.717052   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:34.217128   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:34.716466   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:35.216975   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:35.716593   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.216517   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.716294   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:37.217023   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:37.716511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:38.216231   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:38.716522   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:39.216492   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:39.716478   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:40.216337   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:40.716395   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:41.216516   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:41.716363   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:42.217236   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:42.716938   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:43.216950   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:43.717242   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.216318   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.716925   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.216991   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.717299   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.216545   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.717273   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:47.217030   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:47.716837   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:48.216368   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:48.716993   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:49.216273   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:49.717087   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:50.216313   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:50.716844   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:51.216793   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:51.716262   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.216710   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.716538   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:53.216424   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:53.716256   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:54.216266   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:54.716357   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:55.217214   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:55.716788   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:56.216920   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:56.716328   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:57.217140   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:57.717149   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.217011   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.716511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:59.216969   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:59.717145   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:00.216454   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:00.717154   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:01.216534   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:01.716349   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.217140   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.716458   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:03.216539   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:03.717179   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:04.216994   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:04.716264   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:05.216962   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:05.716753   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:06.216886   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:06.717064   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.217069   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.716953   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:08.216521   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:08.716334   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:09.216504   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:09.716904   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:10.216483   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:10.717066   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:11.216328   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:11.717249   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:12.216579   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:12.716697   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:13.217042   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:13.717186   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:14.216301   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:14.716510   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:15.216925   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:15.716962   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.216373   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.716871   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:17.217108   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:17.716670   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:18.216503   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:18.717214   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:19.216481   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:19.716922   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:20.216618   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:20.717047   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.216924   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.716824   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:22.216907   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:22.716538   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:23.216351   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:23.716755   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:24.216816   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:24.717065   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:25.216949   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:25.716863   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:26.217017   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:26.217108   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:26.259154   59674 cri.go:89] found id: ""
	I0722 11:52:26.259183   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.259193   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:26.259201   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:26.259260   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:26.292777   59674 cri.go:89] found id: ""
	I0722 11:52:26.292801   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.292807   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:26.292813   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:26.292858   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:26.327874   59674 cri.go:89] found id: ""
	I0722 11:52:26.327899   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.327907   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:26.327913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:26.327960   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:26.372370   59674 cri.go:89] found id: ""
	I0722 11:52:26.372405   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.372415   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:26.372421   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:26.372468   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:26.406270   59674 cri.go:89] found id: ""
	I0722 11:52:26.406294   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.406301   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:26.406306   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:26.406355   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:26.441204   59674 cri.go:89] found id: ""
	I0722 11:52:26.441230   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.441237   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:26.441242   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:26.441302   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:26.476132   59674 cri.go:89] found id: ""
	I0722 11:52:26.476162   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.476174   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:26.476180   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:26.476236   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:26.509534   59674 cri.go:89] found id: ""
	I0722 11:52:26.509565   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.509576   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:26.509588   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:26.509601   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:26.564002   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:26.564030   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:26.578619   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:26.578650   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:26.706713   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:26.706738   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:26.706752   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:26.772168   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:26.772201   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:29.313944   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:29.328002   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:29.328076   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:29.367128   59674 cri.go:89] found id: ""
	I0722 11:52:29.367157   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.367166   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:29.367173   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:29.367244   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:29.401552   59674 cri.go:89] found id: ""
	I0722 11:52:29.401581   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.401592   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:29.401599   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:29.401677   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:29.433892   59674 cri.go:89] found id: ""
	I0722 11:52:29.433919   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.433931   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:29.433943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:29.433993   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:29.469619   59674 cri.go:89] found id: ""
	I0722 11:52:29.469649   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.469660   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:29.469667   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:29.469726   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:29.504771   59674 cri.go:89] found id: ""
	I0722 11:52:29.504795   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.504805   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:29.504811   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:29.504871   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:29.538861   59674 cri.go:89] found id: ""
	I0722 11:52:29.538890   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.538900   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:29.538912   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:29.538975   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:29.593633   59674 cri.go:89] found id: ""
	I0722 11:52:29.593669   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.593680   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:29.593688   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:29.593747   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:29.638605   59674 cri.go:89] found id: ""
	I0722 11:52:29.638636   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.638645   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:29.638653   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:29.638664   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:29.691633   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:29.691662   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:29.707277   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:29.707305   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:29.785616   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:29.785638   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:29.785669   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:29.857487   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:29.857517   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:32.398141   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:32.411380   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:32.411453   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:32.445857   59674 cri.go:89] found id: ""
	I0722 11:52:32.445882   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.445889   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:32.445895   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:32.445946   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:32.478146   59674 cri.go:89] found id: ""
	I0722 11:52:32.478180   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.478190   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:32.478197   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:32.478268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:32.511110   59674 cri.go:89] found id: ""
	I0722 11:52:32.511138   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.511147   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:32.511161   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:32.511216   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:32.545388   59674 cri.go:89] found id: ""
	I0722 11:52:32.545415   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.545425   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:32.545432   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:32.545489   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:32.579097   59674 cri.go:89] found id: ""
	I0722 11:52:32.579125   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.579135   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:32.579141   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:32.579205   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:32.615302   59674 cri.go:89] found id: ""
	I0722 11:52:32.615333   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.615343   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:32.615350   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:32.615407   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:32.654527   59674 cri.go:89] found id: ""
	I0722 11:52:32.654552   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.654562   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:32.654568   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:32.654625   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:32.689409   59674 cri.go:89] found id: ""
	I0722 11:52:32.689437   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.689445   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:32.689454   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:32.689470   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:32.740478   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:32.740511   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:32.754266   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:32.754299   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:32.824441   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:32.824461   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:32.824475   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:32.896752   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:32.896781   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:35.438478   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:35.454105   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:35.454175   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:35.493287   59674 cri.go:89] found id: ""
	I0722 11:52:35.493319   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.493330   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:35.493337   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:35.493396   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:35.528035   59674 cri.go:89] found id: ""
	I0722 11:52:35.528060   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.528066   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:35.528072   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:35.528126   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:35.586153   59674 cri.go:89] found id: ""
	I0722 11:52:35.586199   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.586213   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:35.586220   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:35.586283   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:35.630371   59674 cri.go:89] found id: ""
	I0722 11:52:35.630405   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.630416   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:35.630425   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:35.630499   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:35.667593   59674 cri.go:89] found id: ""
	I0722 11:52:35.667621   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.667629   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:35.667635   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:35.667682   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:35.706933   59674 cri.go:89] found id: ""
	I0722 11:52:35.706964   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.706973   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:35.706981   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:35.707040   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:35.743174   59674 cri.go:89] found id: ""
	I0722 11:52:35.743205   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.743215   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:35.743223   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:35.743289   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:35.784450   59674 cri.go:89] found id: ""
	I0722 11:52:35.784478   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.784487   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:35.784497   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:35.784508   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:35.840326   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:35.840357   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:35.856432   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:35.856471   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:35.932273   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:35.932298   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:35.932313   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:36.010376   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:36.010420   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:38.552982   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:38.566817   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:38.566895   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:38.601313   59674 cri.go:89] found id: ""
	I0722 11:52:38.601356   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.601371   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:38.601381   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:38.601459   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:38.637303   59674 cri.go:89] found id: ""
	I0722 11:52:38.637331   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.637341   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:38.637352   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:38.637413   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:38.672840   59674 cri.go:89] found id: ""
	I0722 11:52:38.672871   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.672883   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:38.672894   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:38.672986   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:38.709375   59674 cri.go:89] found id: ""
	I0722 11:52:38.709402   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.709413   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:38.709420   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:38.709473   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:38.744060   59674 cri.go:89] found id: ""
	I0722 11:52:38.744084   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.744094   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:38.744100   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:38.744161   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:38.778322   59674 cri.go:89] found id: ""
	I0722 11:52:38.778350   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.778361   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:38.778368   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:38.778427   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:38.811803   59674 cri.go:89] found id: ""
	I0722 11:52:38.811830   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.811840   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:38.811847   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:38.811902   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:38.843935   59674 cri.go:89] found id: ""
	I0722 11:52:38.843959   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.843975   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:38.843985   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:38.843999   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:38.912613   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:38.912639   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:38.912654   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:39.001924   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:39.001964   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:39.041645   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:39.041684   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:39.093322   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:39.093354   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:41.606698   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:41.619758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:41.619815   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:41.657432   59674 cri.go:89] found id: ""
	I0722 11:52:41.657458   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.657469   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:41.657476   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:41.657536   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:41.695136   59674 cri.go:89] found id: ""
	I0722 11:52:41.695169   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.695177   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:41.695183   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:41.695243   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:41.735595   59674 cri.go:89] found id: ""
	I0722 11:52:41.735621   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.735641   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:41.735648   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:41.735710   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:41.770398   59674 cri.go:89] found id: ""
	I0722 11:52:41.770428   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.770438   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:41.770445   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:41.770554   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:41.808250   59674 cri.go:89] found id: ""
	I0722 11:52:41.808277   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.808285   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:41.808290   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:41.808349   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:41.843494   59674 cri.go:89] found id: ""
	I0722 11:52:41.843524   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.843536   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:41.843543   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:41.843611   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:41.882916   59674 cri.go:89] found id: ""
	I0722 11:52:41.882941   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.882949   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:41.882954   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:41.883011   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:41.916503   59674 cri.go:89] found id: ""
	I0722 11:52:41.916527   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.916538   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:41.916549   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:41.916564   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:41.966989   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:41.967023   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:42.021676   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:42.021716   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:42.054625   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:42.054655   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:42.122425   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:42.122449   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:42.122463   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:44.699097   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:44.713759   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:44.713815   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:44.752668   59674 cri.go:89] found id: ""
	I0722 11:52:44.752698   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.752709   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:44.752716   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:44.752778   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:44.793550   59674 cri.go:89] found id: ""
	I0722 11:52:44.793575   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.793587   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:44.793594   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:44.793665   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:44.833860   59674 cri.go:89] found id: ""
	I0722 11:52:44.833882   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.833890   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:44.833903   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:44.833952   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:44.873847   59674 cri.go:89] found id: ""
	I0722 11:52:44.873880   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.873898   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:44.873910   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:44.873957   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:44.907843   59674 cri.go:89] found id: ""
	I0722 11:52:44.907867   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.907877   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:44.907884   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:44.907937   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:44.942998   59674 cri.go:89] found id: ""
	I0722 11:52:44.943026   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.943034   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:44.943040   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:44.943093   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:44.981145   59674 cri.go:89] found id: ""
	I0722 11:52:44.981173   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.981183   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:44.981190   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:44.981252   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:45.018542   59674 cri.go:89] found id: ""
	I0722 11:52:45.018568   59674 logs.go:276] 0 containers: []
	W0722 11:52:45.018576   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:45.018585   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:45.018599   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:45.069480   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:45.069510   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:45.083323   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:45.083347   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:45.149976   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:45.149996   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:45.150008   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:45.230617   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:45.230649   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:47.770384   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:47.793582   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:47.793654   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:47.837187   59674 cri.go:89] found id: ""
	I0722 11:52:47.837215   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.837224   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:47.837232   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:47.837290   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:47.874295   59674 cri.go:89] found id: ""
	I0722 11:52:47.874325   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.874336   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:47.874345   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:47.874414   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:47.915782   59674 cri.go:89] found id: ""
	I0722 11:52:47.915812   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.915823   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:47.915830   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:47.915886   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:47.956624   59674 cri.go:89] found id: ""
	I0722 11:52:47.956653   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.956663   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:47.956670   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:47.956731   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:47.996237   59674 cri.go:89] found id: ""
	I0722 11:52:47.996264   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.996272   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:47.996277   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:47.996335   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:48.032022   59674 cri.go:89] found id: ""
	I0722 11:52:48.032046   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.032058   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:48.032066   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:48.032117   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:48.066218   59674 cri.go:89] found id: ""
	I0722 11:52:48.066248   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.066259   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:48.066265   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:48.066316   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:48.099781   59674 cri.go:89] found id: ""
	I0722 11:52:48.099803   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.099810   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:48.099818   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:48.099827   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:48.174488   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:48.174528   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:48.215029   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:48.215068   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:48.268819   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:48.268850   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:48.283307   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:48.283335   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:48.356491   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:50.857172   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:50.871178   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:50.871244   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:50.907166   59674 cri.go:89] found id: ""
	I0722 11:52:50.907190   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.907197   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:50.907203   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:50.907256   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:50.942929   59674 cri.go:89] found id: ""
	I0722 11:52:50.942958   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.942969   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:50.942976   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:50.943041   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:50.982323   59674 cri.go:89] found id: ""
	I0722 11:52:50.982355   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.982367   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:50.982373   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:50.982436   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:51.016557   59674 cri.go:89] found id: ""
	I0722 11:52:51.016586   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.016597   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:51.016604   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:51.016662   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:51.051811   59674 cri.go:89] found id: ""
	I0722 11:52:51.051844   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.051855   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:51.051863   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:51.051923   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:51.088147   59674 cri.go:89] found id: ""
	I0722 11:52:51.088177   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.088189   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:51.088197   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:51.088257   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:51.126795   59674 cri.go:89] found id: ""
	I0722 11:52:51.126827   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.126838   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:51.126845   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:51.126909   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:51.165508   59674 cri.go:89] found id: ""
	I0722 11:52:51.165539   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.165550   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:51.165562   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:51.165575   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:51.245014   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:51.245040   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:51.245055   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:51.335845   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:51.335893   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:51.375806   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:51.375837   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:51.430241   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:51.430270   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:53.944572   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:53.957805   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:53.957899   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:53.997116   59674 cri.go:89] found id: ""
	I0722 11:52:53.997144   59674 logs.go:276] 0 containers: []
	W0722 11:52:53.997154   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:53.997161   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:53.997222   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:54.033518   59674 cri.go:89] found id: ""
	I0722 11:52:54.033544   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.033553   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:54.033560   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:54.033626   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:54.071083   59674 cri.go:89] found id: ""
	I0722 11:52:54.071108   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.071119   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:54.071127   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:54.071194   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:54.107834   59674 cri.go:89] found id: ""
	I0722 11:52:54.107860   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.107868   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:54.107873   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:54.107929   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:54.141825   59674 cri.go:89] found id: ""
	I0722 11:52:54.141850   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.141858   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:54.141865   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:54.141925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:54.174297   59674 cri.go:89] found id: ""
	I0722 11:52:54.174323   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.174333   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:54.174341   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:54.174403   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:54.206781   59674 cri.go:89] found id: ""
	I0722 11:52:54.206803   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.206811   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:54.206816   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:54.206861   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:54.239180   59674 cri.go:89] found id: ""
	I0722 11:52:54.239204   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.239212   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:54.239223   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:54.239237   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:54.307317   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:54.307345   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:54.307360   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:54.392334   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:54.392368   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:54.435129   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:54.435168   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:54.495428   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:54.495456   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:57.009559   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:57.024145   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:57.024215   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:57.063027   59674 cri.go:89] found id: ""
	I0722 11:52:57.063053   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.063060   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:57.063066   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:57.063133   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:57.095940   59674 cri.go:89] found id: ""
	I0722 11:52:57.095961   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.095968   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:57.095973   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:57.096018   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:57.129931   59674 cri.go:89] found id: ""
	I0722 11:52:57.129952   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.129960   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:57.129965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:57.130009   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:57.164643   59674 cri.go:89] found id: ""
	I0722 11:52:57.164672   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.164683   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:57.164691   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:57.164744   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:57.201411   59674 cri.go:89] found id: ""
	I0722 11:52:57.201440   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.201451   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:57.201458   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:57.201523   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:57.235816   59674 cri.go:89] found id: ""
	I0722 11:52:57.235838   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.235848   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:57.235854   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:57.235913   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:57.273896   59674 cri.go:89] found id: ""
	I0722 11:52:57.273925   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.273936   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:57.273943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:57.273997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:57.312577   59674 cri.go:89] found id: ""
	I0722 11:52:57.312602   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.312610   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:57.312618   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:57.312636   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:57.366529   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:57.366558   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:57.380829   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:57.380854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:57.450855   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:57.450875   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:57.450889   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:57.531450   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:57.531480   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:00.071642   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:00.085199   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:00.085264   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:00.123418   59674 cri.go:89] found id: ""
	I0722 11:53:00.123439   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.123446   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:00.123451   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:00.123510   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:00.157005   59674 cri.go:89] found id: ""
	I0722 11:53:00.157032   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.157042   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:00.157049   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:00.157108   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:00.196244   59674 cri.go:89] found id: ""
	I0722 11:53:00.196272   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.196281   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:00.196286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:00.196335   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:00.233010   59674 cri.go:89] found id: ""
	I0722 11:53:00.233039   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.233049   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:00.233056   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:00.233112   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:00.268154   59674 cri.go:89] found id: ""
	I0722 11:53:00.268179   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.268187   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:00.268192   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:00.268250   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:00.304159   59674 cri.go:89] found id: ""
	I0722 11:53:00.304184   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.304194   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:00.304201   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:00.304268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:00.336853   59674 cri.go:89] found id: ""
	I0722 11:53:00.336883   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.336893   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:00.336899   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:00.336960   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:00.370921   59674 cri.go:89] found id: ""
	I0722 11:53:00.370943   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.370953   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:00.370963   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:00.370979   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:00.422367   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:00.422399   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:00.437915   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:00.437947   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:00.512663   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:00.512689   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:00.512700   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:00.595147   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:00.595189   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:03.135150   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:03.148079   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:03.148151   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:03.182278   59674 cri.go:89] found id: ""
	I0722 11:53:03.182308   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.182318   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:03.182327   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:03.182409   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:03.220570   59674 cri.go:89] found id: ""
	I0722 11:53:03.220599   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.220607   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:03.220613   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:03.220671   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:03.255917   59674 cri.go:89] found id: ""
	I0722 11:53:03.255940   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.255950   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:03.255957   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:03.256020   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:03.290857   59674 cri.go:89] found id: ""
	I0722 11:53:03.290885   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.290895   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:03.290902   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:03.290959   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:03.326917   59674 cri.go:89] found id: ""
	I0722 11:53:03.326940   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.326951   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:03.326958   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:03.327016   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:03.363787   59674 cri.go:89] found id: ""
	I0722 11:53:03.363809   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.363818   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:03.363825   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:03.363881   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:03.397453   59674 cri.go:89] found id: ""
	I0722 11:53:03.397479   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.397489   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:03.397496   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:03.397554   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:03.429984   59674 cri.go:89] found id: ""
	I0722 11:53:03.430012   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.430020   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:03.430037   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:03.430054   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:03.509273   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:03.509305   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:03.555522   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:03.555552   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:03.607361   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:03.607389   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:03.622731   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:03.622752   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:03.699844   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:06.200053   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:06.213571   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:06.213628   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:06.249320   59674 cri.go:89] found id: ""
	I0722 11:53:06.249348   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.249359   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:06.249366   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:06.249426   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:06.283378   59674 cri.go:89] found id: ""
	I0722 11:53:06.283405   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.283415   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:06.283422   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:06.283482   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:06.319519   59674 cri.go:89] found id: ""
	I0722 11:53:06.319540   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.319548   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:06.319553   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:06.319606   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:06.352263   59674 cri.go:89] found id: ""
	I0722 11:53:06.352289   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.352298   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:06.352310   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:06.352370   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:06.388262   59674 cri.go:89] found id: ""
	I0722 11:53:06.388285   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.388292   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:06.388297   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:06.388348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:06.427487   59674 cri.go:89] found id: ""
	I0722 11:53:06.427519   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.427529   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:06.427537   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:06.427592   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:06.462567   59674 cri.go:89] found id: ""
	I0722 11:53:06.462597   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.462610   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:06.462618   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:06.462674   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:06.496880   59674 cri.go:89] found id: ""
	I0722 11:53:06.496904   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.496911   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:06.496920   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:06.496929   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:06.549225   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:06.549262   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:06.564780   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:06.564808   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:06.632152   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:06.632177   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:06.632196   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:06.706909   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:06.706948   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:09.246773   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:09.260605   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:09.260673   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:09.294685   59674 cri.go:89] found id: ""
	I0722 11:53:09.294707   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.294718   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:09.294726   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:09.294787   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:09.331109   59674 cri.go:89] found id: ""
	I0722 11:53:09.331140   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.331148   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:09.331153   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:09.331208   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:09.366873   59674 cri.go:89] found id: ""
	I0722 11:53:09.366901   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.366911   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:09.366928   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:09.366980   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:09.399614   59674 cri.go:89] found id: ""
	I0722 11:53:09.399642   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.399649   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:09.399655   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:09.399708   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:09.434326   59674 cri.go:89] found id: ""
	I0722 11:53:09.434359   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.434369   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:09.434375   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:09.434437   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:09.468911   59674 cri.go:89] found id: ""
	I0722 11:53:09.468942   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.468953   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:09.468961   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:09.469021   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:09.510003   59674 cri.go:89] found id: ""
	I0722 11:53:09.510031   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.510042   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:09.510048   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:09.510101   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:09.545074   59674 cri.go:89] found id: ""
	I0722 11:53:09.545103   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.545113   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:09.545123   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:09.545148   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:09.559370   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:09.559399   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:09.632039   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:09.632064   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:09.632083   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:09.711851   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:09.711881   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:09.751872   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:09.751898   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:12.302294   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:12.315638   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:12.315708   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:12.349556   59674 cri.go:89] found id: ""
	I0722 11:53:12.349579   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.349588   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:12.349595   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:12.349651   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:12.387443   59674 cri.go:89] found id: ""
	I0722 11:53:12.387470   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.387483   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:12.387488   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:12.387541   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:12.422676   59674 cri.go:89] found id: ""
	I0722 11:53:12.422704   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.422714   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:12.422720   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:12.422781   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:12.457069   59674 cri.go:89] found id: ""
	I0722 11:53:12.457099   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.457111   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:12.457117   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:12.457175   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:12.492498   59674 cri.go:89] found id: ""
	I0722 11:53:12.492526   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.492536   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:12.492543   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:12.492603   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:12.529015   59674 cri.go:89] found id: ""
	I0722 11:53:12.529046   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.529056   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:12.529063   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:12.529122   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:12.564325   59674 cri.go:89] found id: ""
	I0722 11:53:12.564353   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.564363   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:12.564371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:12.564441   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:12.603232   59674 cri.go:89] found id: ""
	I0722 11:53:12.603257   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.603269   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:12.603278   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:12.603289   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:12.689901   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:12.689933   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:12.729780   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:12.729808   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:12.778899   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:12.778928   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:12.792619   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:12.792649   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:12.860293   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:15.361321   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:15.375062   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:15.375125   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:15.409072   59674 cri.go:89] found id: ""
	I0722 11:53:15.409096   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.409104   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:15.409109   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:15.409163   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:15.447004   59674 cri.go:89] found id: ""
	I0722 11:53:15.447026   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.447033   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:15.447039   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:15.447096   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:15.480783   59674 cri.go:89] found id: ""
	I0722 11:53:15.480811   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.480822   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:15.480829   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:15.480906   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:15.520672   59674 cri.go:89] found id: ""
	I0722 11:53:15.520701   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.520713   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:15.520721   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:15.520777   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:15.557886   59674 cri.go:89] found id: ""
	I0722 11:53:15.557916   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.557926   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:15.557933   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:15.557994   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:15.593517   59674 cri.go:89] found id: ""
	I0722 11:53:15.593545   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.593555   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:15.593561   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:15.593619   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:15.628205   59674 cri.go:89] found id: ""
	I0722 11:53:15.628235   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.628246   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:15.628253   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:15.628314   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:15.664239   59674 cri.go:89] found id: ""
	I0722 11:53:15.664265   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.664276   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:15.664287   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:15.664300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:15.714246   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:15.714281   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:15.728467   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:15.728490   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:15.813299   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:15.813323   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:15.813339   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:15.899949   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:15.899984   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:18.443394   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:18.457499   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:18.457555   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:18.489712   59674 cri.go:89] found id: ""
	I0722 11:53:18.489735   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.489745   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:18.489752   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:18.489812   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:18.524947   59674 cri.go:89] found id: ""
	I0722 11:53:18.524973   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.524982   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:18.524989   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:18.525045   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:18.560325   59674 cri.go:89] found id: ""
	I0722 11:53:18.560350   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.560361   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:18.560367   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:18.560439   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:18.594221   59674 cri.go:89] found id: ""
	I0722 11:53:18.594247   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.594255   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:18.594265   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:18.594322   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:18.630809   59674 cri.go:89] found id: ""
	I0722 11:53:18.630839   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.630850   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:18.630857   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:18.630917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:18.666051   59674 cri.go:89] found id: ""
	I0722 11:53:18.666078   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.666089   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:18.666100   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:18.666159   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:18.703337   59674 cri.go:89] found id: ""
	I0722 11:53:18.703362   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.703370   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:18.703375   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:18.703435   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:18.738960   59674 cri.go:89] found id: ""
	I0722 11:53:18.738990   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.738999   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:18.739008   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:18.739022   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:18.788130   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:18.788163   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:18.802219   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:18.802249   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:18.869568   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:18.869586   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:18.869597   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:18.947223   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:18.947256   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:21.487936   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:21.501337   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:21.501421   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:21.537649   59674 cri.go:89] found id: ""
	I0722 11:53:21.537674   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.537681   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:21.537686   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:21.537746   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:21.583693   59674 cri.go:89] found id: ""
	I0722 11:53:21.583728   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.583738   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:21.583745   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:21.583803   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:21.621690   59674 cri.go:89] found id: ""
	I0722 11:53:21.621714   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.621722   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:21.621728   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:21.621773   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:21.657855   59674 cri.go:89] found id: ""
	I0722 11:53:21.657878   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.657885   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:21.657891   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:21.657953   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:21.695025   59674 cri.go:89] found id: ""
	I0722 11:53:21.695051   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.695059   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:21.695065   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:21.695113   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:21.730108   59674 cri.go:89] found id: ""
	I0722 11:53:21.730138   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.730146   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:21.730151   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:21.730208   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:21.763943   59674 cri.go:89] found id: ""
	I0722 11:53:21.763972   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.763980   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:21.763985   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:21.764030   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:21.801227   59674 cri.go:89] found id: ""
	I0722 11:53:21.801251   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.801259   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:21.801270   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:21.801283   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:21.851428   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:21.851457   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:21.867798   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:21.867827   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:21.945577   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:21.945599   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:21.945612   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:22.028796   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:22.028839   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:24.577167   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:24.589859   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:24.589917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:24.623952   59674 cri.go:89] found id: ""
	I0722 11:53:24.623985   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.623997   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:24.624003   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:24.624065   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:24.658881   59674 cri.go:89] found id: ""
	I0722 11:53:24.658910   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.658919   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:24.658925   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:24.658973   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:24.694551   59674 cri.go:89] found id: ""
	I0722 11:53:24.694574   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.694584   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:24.694590   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:24.694634   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:24.728952   59674 cri.go:89] found id: ""
	I0722 11:53:24.728980   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.728990   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:24.728999   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:24.729061   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:24.764562   59674 cri.go:89] found id: ""
	I0722 11:53:24.764584   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.764592   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:24.764597   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:24.764643   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:24.804184   59674 cri.go:89] found id: ""
	I0722 11:53:24.804209   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.804219   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:24.804226   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:24.804277   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:24.841870   59674 cri.go:89] found id: ""
	I0722 11:53:24.841896   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.841906   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:24.841913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:24.841967   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:24.876174   59674 cri.go:89] found id: ""
	I0722 11:53:24.876201   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.876210   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:24.876220   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:24.876234   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:24.928405   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:24.928434   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:24.942443   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:24.942472   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:25.010281   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:25.010304   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:25.010318   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:25.091493   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:25.091525   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:27.630939   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:27.644250   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:27.644324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:27.686356   59674 cri.go:89] found id: ""
	I0722 11:53:27.686381   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.686391   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:27.686404   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:27.686483   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:27.719105   59674 cri.go:89] found id: ""
	I0722 11:53:27.719133   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.719143   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:27.719149   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:27.719210   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:27.755476   59674 cri.go:89] found id: ""
	I0722 11:53:27.755505   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.755514   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:27.755520   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:27.755570   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:27.789936   59674 cri.go:89] found id: ""
	I0722 11:53:27.789963   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.789971   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:27.789977   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:27.790023   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:27.824246   59674 cri.go:89] found id: ""
	I0722 11:53:27.824273   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.824280   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:27.824286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:27.824332   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:27.860081   59674 cri.go:89] found id: ""
	I0722 11:53:27.860107   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.860114   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:27.860120   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:27.860172   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:27.895705   59674 cri.go:89] found id: ""
	I0722 11:53:27.895732   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.895741   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:27.895748   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:27.895801   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:27.930750   59674 cri.go:89] found id: ""
	I0722 11:53:27.930774   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.930781   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:27.930790   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:27.930802   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:28.025545   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:28.025567   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:28.025578   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:28.111194   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:28.111227   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:28.154270   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:28.154300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:28.205822   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:28.205854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:30.720468   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:30.733753   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:30.733806   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:30.771774   59674 cri.go:89] found id: ""
	I0722 11:53:30.771803   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.771810   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:30.771816   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:30.771876   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:30.810499   59674 cri.go:89] found id: ""
	I0722 11:53:30.810526   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.810537   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:30.810543   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:30.810608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:30.846824   59674 cri.go:89] found id: ""
	I0722 11:53:30.846854   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.846865   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:30.846872   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:30.846929   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:30.882372   59674 cri.go:89] found id: ""
	I0722 11:53:30.882399   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.882408   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:30.882415   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:30.882462   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:30.916152   59674 cri.go:89] found id: ""
	I0722 11:53:30.916186   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.916201   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:30.916209   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:30.916281   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:30.950442   59674 cri.go:89] found id: ""
	I0722 11:53:30.950466   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.950475   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:30.950482   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:30.950537   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:30.988328   59674 cri.go:89] found id: ""
	I0722 11:53:30.988355   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.988367   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:30.988374   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:30.988452   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:31.024500   59674 cri.go:89] found id: ""
	I0722 11:53:31.024531   59674 logs.go:276] 0 containers: []
	W0722 11:53:31.024542   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:31.024552   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:31.024565   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:31.078276   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:31.078306   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:31.093640   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:31.093665   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:31.161107   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:31.161131   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:31.161145   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:31.248520   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:31.248552   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:33.792694   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:33.806731   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:33.806802   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:33.840813   59674 cri.go:89] found id: ""
	I0722 11:53:33.840842   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.840852   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:33.840859   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:33.840930   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:33.878353   59674 cri.go:89] found id: ""
	I0722 11:53:33.878380   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.878388   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:33.878394   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:33.878453   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:33.913894   59674 cri.go:89] found id: ""
	I0722 11:53:33.913927   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.913937   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:33.913944   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:33.914007   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:33.950659   59674 cri.go:89] found id: ""
	I0722 11:53:33.950689   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.950700   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:33.950706   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:33.950762   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:33.987904   59674 cri.go:89] found id: ""
	I0722 11:53:33.987932   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.987940   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:33.987945   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:33.987995   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:34.022877   59674 cri.go:89] found id: ""
	I0722 11:53:34.022900   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.022910   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:34.022918   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:34.022970   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:34.056678   59674 cri.go:89] found id: ""
	I0722 11:53:34.056707   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.056717   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:34.056722   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:34.056769   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:34.089573   59674 cri.go:89] found id: ""
	I0722 11:53:34.089602   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.089610   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:34.089618   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:34.089630   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:34.161023   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:34.161043   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:34.161058   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:34.243215   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:34.243249   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:34.290788   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:34.290812   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:34.339653   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:34.339692   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:36.857217   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:36.871083   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:36.871150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:36.913807   59674 cri.go:89] found id: ""
	I0722 11:53:36.913833   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.913841   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:36.913847   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:36.913923   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:36.953290   59674 cri.go:89] found id: ""
	I0722 11:53:36.953316   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.953327   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:36.953334   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:36.953395   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:36.990900   59674 cri.go:89] found id: ""
	I0722 11:53:36.990930   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.990938   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:36.990943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:36.990997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:37.034346   59674 cri.go:89] found id: ""
	I0722 11:53:37.034371   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.034381   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:37.034387   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:37.034444   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:37.071413   59674 cri.go:89] found id: ""
	I0722 11:53:37.071440   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.071451   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:37.071458   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:37.071509   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:37.107034   59674 cri.go:89] found id: ""
	I0722 11:53:37.107065   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.107076   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:37.107084   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:37.107143   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:37.145505   59674 cri.go:89] found id: ""
	I0722 11:53:37.145528   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.145536   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:37.145545   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:37.145607   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:37.182287   59674 cri.go:89] found id: ""
	I0722 11:53:37.182313   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.182321   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:37.182332   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:37.182343   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:37.195663   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:37.195688   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:37.267451   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:37.267476   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:37.267492   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:37.348532   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:37.348561   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:37.396108   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:37.396134   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:39.946775   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:39.959980   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:39.960039   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:39.994172   59674 cri.go:89] found id: ""
	I0722 11:53:39.994198   59674 logs.go:276] 0 containers: []
	W0722 11:53:39.994208   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:39.994213   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:39.994269   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:40.032782   59674 cri.go:89] found id: ""
	I0722 11:53:40.032813   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.032823   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:40.032830   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:40.032890   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:40.067503   59674 cri.go:89] found id: ""
	I0722 11:53:40.067525   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.067532   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:40.067537   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:40.067593   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:40.102234   59674 cri.go:89] found id: ""
	I0722 11:53:40.102262   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.102273   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:40.102280   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:40.102342   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:40.135152   59674 cri.go:89] found id: ""
	I0722 11:53:40.135180   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.135190   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:40.135197   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:40.135262   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:40.168930   59674 cri.go:89] found id: ""
	I0722 11:53:40.168958   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.168978   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:40.168993   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:40.169056   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:40.209032   59674 cri.go:89] found id: ""
	I0722 11:53:40.209058   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.209065   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:40.209071   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:40.209131   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:40.243952   59674 cri.go:89] found id: ""
	I0722 11:53:40.243976   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.243984   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:40.243993   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:40.244006   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:40.297909   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:40.297944   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:40.313359   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:40.313385   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:40.391089   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:40.391118   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:40.391136   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:40.469622   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:40.469652   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:43.010264   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:43.023750   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:43.023823   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:43.058899   59674 cri.go:89] found id: ""
	I0722 11:53:43.058922   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.058930   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:43.058937   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:43.058999   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:43.093308   59674 cri.go:89] found id: ""
	I0722 11:53:43.093328   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.093336   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:43.093341   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:43.093385   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:43.126617   59674 cri.go:89] found id: ""
	I0722 11:53:43.126648   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.126671   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:43.126686   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:43.126737   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:43.159455   59674 cri.go:89] found id: ""
	I0722 11:53:43.159482   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.159492   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:43.159500   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:43.159561   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:43.195726   59674 cri.go:89] found id: ""
	I0722 11:53:43.195749   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.195758   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:43.195766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:43.195830   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:43.231996   59674 cri.go:89] found id: ""
	I0722 11:53:43.232025   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.232038   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:43.232046   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:43.232118   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:43.266911   59674 cri.go:89] found id: ""
	I0722 11:53:43.266936   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.266943   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:43.266948   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:43.267005   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:43.303202   59674 cri.go:89] found id: ""
	I0722 11:53:43.303227   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.303236   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:43.303243   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:43.303255   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:43.377328   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:43.377362   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:43.418732   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:43.418759   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:43.471507   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:43.471536   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:43.485141   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:43.485175   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:43.557071   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:46.057361   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:46.071701   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:46.071784   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:46.107818   59674 cri.go:89] found id: ""
	I0722 11:53:46.107845   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.107853   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:46.107859   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:46.107952   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:46.141871   59674 cri.go:89] found id: ""
	I0722 11:53:46.141898   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.141906   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:46.141911   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:46.141972   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:46.180980   59674 cri.go:89] found id: ""
	I0722 11:53:46.181004   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.181014   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:46.181021   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:46.181083   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:46.219765   59674 cri.go:89] found id: ""
	I0722 11:53:46.219797   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.219806   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:46.219812   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:46.219866   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:46.259517   59674 cri.go:89] found id: ""
	I0722 11:53:46.259544   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.259554   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:46.259562   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:46.259621   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:46.292190   59674 cri.go:89] found id: ""
	I0722 11:53:46.292220   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.292230   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:46.292239   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:46.292305   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:46.325494   59674 cri.go:89] found id: ""
	I0722 11:53:46.325519   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.325529   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:46.325536   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:46.325608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:46.364367   59674 cri.go:89] found id: ""
	I0722 11:53:46.364403   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.364412   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:46.364422   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:46.364435   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:46.417749   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:46.417792   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:46.433793   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:46.433817   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:46.502075   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:46.502098   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:46.502111   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:46.584038   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:46.584075   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:49.127895   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:49.141601   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:49.141672   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:49.175251   59674 cri.go:89] found id: ""
	I0722 11:53:49.175276   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.175284   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:49.175290   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:49.175346   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:49.214504   59674 cri.go:89] found id: ""
	I0722 11:53:49.214552   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.214563   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:49.214570   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:49.214631   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:49.251844   59674 cri.go:89] found id: ""
	I0722 11:53:49.251872   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.251882   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:49.251889   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:49.251955   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:49.285540   59674 cri.go:89] found id: ""
	I0722 11:53:49.285569   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.285577   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:49.285582   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:49.285630   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:49.323300   59674 cri.go:89] found id: ""
	I0722 11:53:49.323321   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.323331   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:49.323336   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:49.323393   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:49.361571   59674 cri.go:89] found id: ""
	I0722 11:53:49.361599   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.361609   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:49.361615   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:49.361675   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:49.398709   59674 cri.go:89] found id: ""
	I0722 11:53:49.398736   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.398747   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:49.398753   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:49.398813   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:49.430527   59674 cri.go:89] found id: ""
	I0722 11:53:49.430552   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.430564   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:49.430576   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:49.430591   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:49.481517   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:49.481557   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:49.496069   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:49.496094   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:49.563515   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:49.563536   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:49.563549   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:49.645313   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:49.645354   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:52.188460   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:52.201620   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:52.201689   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:52.238836   59674 cri.go:89] found id: ""
	I0722 11:53:52.238858   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.238865   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:52.238870   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:52.238932   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:52.275739   59674 cri.go:89] found id: ""
	I0722 11:53:52.275760   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.275768   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:52.275781   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:52.275839   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:52.310362   59674 cri.go:89] found id: ""
	I0722 11:53:52.310390   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.310397   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:52.310402   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:52.310461   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:52.348733   59674 cri.go:89] found id: ""
	I0722 11:53:52.348753   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.348760   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:52.348766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:52.348822   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:52.383052   59674 cri.go:89] found id: ""
	I0722 11:53:52.383079   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.383087   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:52.383094   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:52.383155   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:52.420557   59674 cri.go:89] found id: ""
	I0722 11:53:52.420579   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.420587   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:52.420592   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:52.420655   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:52.454027   59674 cri.go:89] found id: ""
	I0722 11:53:52.454057   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.454066   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:52.454073   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:52.454134   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:52.495433   59674 cri.go:89] found id: ""
	I0722 11:53:52.495458   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.495469   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:52.495480   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:52.495493   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:52.541383   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:52.541417   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:52.595687   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:52.595733   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:52.609965   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:52.609987   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:52.687531   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:52.687552   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:52.687565   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:55.270419   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:55.284577   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:55.284632   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:55.321978   59674 cri.go:89] found id: ""
	I0722 11:53:55.322014   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.322023   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:55.322030   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:55.322092   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:55.358710   59674 cri.go:89] found id: ""
	I0722 11:53:55.358736   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.358746   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:55.358753   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:55.358807   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:55.394784   59674 cri.go:89] found id: ""
	I0722 11:53:55.394810   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.394820   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:55.394827   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:55.394884   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:55.429035   59674 cri.go:89] found id: ""
	I0722 11:53:55.429059   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.429066   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:55.429072   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:55.429122   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:55.464733   59674 cri.go:89] found id: ""
	I0722 11:53:55.464754   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.464761   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:55.464767   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:55.464824   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:55.500113   59674 cri.go:89] found id: ""
	I0722 11:53:55.500140   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.500152   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:55.500164   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:55.500227   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:55.536013   59674 cri.go:89] found id: ""
	I0722 11:53:55.536040   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.536050   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:55.536056   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:55.536129   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:55.575385   59674 cri.go:89] found id: ""
	I0722 11:53:55.575412   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.575420   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:55.575428   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:55.575439   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:55.628427   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:55.628459   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:55.642648   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:55.642677   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:55.715236   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:55.715258   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:55.715270   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:55.794200   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:55.794233   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:58.336329   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:58.351000   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:58.351056   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:58.389817   59674 cri.go:89] found id: ""
	I0722 11:53:58.389841   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.389849   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:58.389854   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:58.389902   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:58.430814   59674 cri.go:89] found id: ""
	I0722 11:53:58.430843   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.430852   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:58.430857   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:58.430917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:58.477898   59674 cri.go:89] found id: ""
	I0722 11:53:58.477928   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.477938   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:58.477947   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:58.477992   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:58.513426   59674 cri.go:89] found id: ""
	I0722 11:53:58.513450   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.513461   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:58.513468   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:58.513530   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:58.546455   59674 cri.go:89] found id: ""
	I0722 11:53:58.546484   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.546494   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:58.546501   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:58.546560   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:58.582248   59674 cri.go:89] found id: ""
	I0722 11:53:58.582273   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.582280   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:58.582286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:58.582339   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:58.617221   59674 cri.go:89] found id: ""
	I0722 11:53:58.617246   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.617253   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:58.617259   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:58.617321   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:58.648896   59674 cri.go:89] found id: ""
	I0722 11:53:58.648930   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.648941   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:58.648949   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:58.648962   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:58.701735   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:58.701771   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:58.715747   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:58.715766   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:58.782104   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:58.782125   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:58.782136   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:58.868634   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:58.868664   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:01.410874   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:01.423839   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:01.423914   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:01.460156   59674 cri.go:89] found id: ""
	I0722 11:54:01.460181   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.460191   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:01.460198   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:01.460252   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:01.497130   59674 cri.go:89] found id: ""
	I0722 11:54:01.497156   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.497165   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:01.497172   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:01.497228   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:01.532805   59674 cri.go:89] found id: ""
	I0722 11:54:01.532832   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.532842   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:01.532849   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:01.532907   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:01.569955   59674 cri.go:89] found id: ""
	I0722 11:54:01.569989   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.569999   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:01.570014   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:01.570067   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:01.602937   59674 cri.go:89] found id: ""
	I0722 11:54:01.602967   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.602977   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:01.602983   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:01.603033   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:01.634250   59674 cri.go:89] found id: ""
	I0722 11:54:01.634276   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.634283   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:01.634289   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:01.634337   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:01.670256   59674 cri.go:89] found id: ""
	I0722 11:54:01.670286   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.670295   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:01.670300   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:01.670348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:01.708555   59674 cri.go:89] found id: ""
	I0722 11:54:01.708577   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.708584   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:01.708592   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:01.708603   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:01.723065   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:01.723090   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:01.790642   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:01.790662   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:01.790673   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:01.887827   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:01.887861   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:01.927121   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:01.927143   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:04.479248   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:04.493038   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:04.493101   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:04.527516   59674 cri.go:89] found id: ""
	I0722 11:54:04.527539   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.527547   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:04.527557   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:04.527603   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:04.565830   59674 cri.go:89] found id: ""
	I0722 11:54:04.565863   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.565874   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:04.565882   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:04.565970   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:04.606198   59674 cri.go:89] found id: ""
	I0722 11:54:04.606223   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.606235   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:04.606242   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:04.606301   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:04.650372   59674 cri.go:89] found id: ""
	I0722 11:54:04.650394   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.650403   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:04.650411   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:04.650473   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:04.689556   59674 cri.go:89] found id: ""
	I0722 11:54:04.689580   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.689587   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:04.689592   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:04.689648   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:04.724954   59674 cri.go:89] found id: ""
	I0722 11:54:04.724986   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.724997   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:04.725004   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:04.725057   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:04.769000   59674 cri.go:89] found id: ""
	I0722 11:54:04.769024   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.769031   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:04.769037   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:04.769088   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:04.802022   59674 cri.go:89] found id: ""
	I0722 11:54:04.802042   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.802049   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:04.802057   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:04.802067   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:04.855969   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:04.856006   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:04.871210   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:04.871238   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:04.938050   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:04.938069   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:04.938082   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:05.014415   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:05.014449   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:07.556725   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:07.583525   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:07.583600   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:07.618546   59674 cri.go:89] found id: ""
	I0722 11:54:07.618574   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.618584   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:07.618591   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:07.618651   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:07.655218   59674 cri.go:89] found id: ""
	I0722 11:54:07.655247   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.655256   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:07.655261   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:07.655321   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:07.695453   59674 cri.go:89] found id: ""
	I0722 11:54:07.695482   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.695491   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:07.695499   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:07.695558   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:07.729887   59674 cri.go:89] found id: ""
	I0722 11:54:07.729922   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.729932   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:07.729939   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:07.729998   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:07.768429   59674 cri.go:89] found id: ""
	I0722 11:54:07.768451   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.768458   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:07.768464   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:07.768520   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:07.804372   59674 cri.go:89] found id: ""
	I0722 11:54:07.804408   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.804419   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:07.804426   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:07.804479   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:07.840924   59674 cri.go:89] found id: ""
	I0722 11:54:07.840948   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.840958   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:07.840965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:07.841027   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:07.877796   59674 cri.go:89] found id: ""
	I0722 11:54:07.877823   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.877830   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:07.877838   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:07.877849   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:07.930437   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:07.930467   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:07.943581   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:07.943611   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:08.013944   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:08.013963   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:08.013973   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:08.090969   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:08.091007   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:10.631507   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:10.644886   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:10.644958   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:10.679242   59674 cri.go:89] found id: ""
	I0722 11:54:10.679268   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.679278   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:10.679284   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:10.679340   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:10.714324   59674 cri.go:89] found id: ""
	I0722 11:54:10.714351   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.714358   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:10.714364   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:10.714425   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:10.751053   59674 cri.go:89] found id: ""
	I0722 11:54:10.751075   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.751090   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:10.751097   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:10.751164   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:10.788736   59674 cri.go:89] found id: ""
	I0722 11:54:10.788765   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.788775   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:10.788782   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:10.788899   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:10.823780   59674 cri.go:89] found id: ""
	I0722 11:54:10.823804   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.823814   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:10.823821   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:10.823884   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:10.859708   59674 cri.go:89] found id: ""
	I0722 11:54:10.859731   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.859741   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:10.859748   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:10.859804   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:10.893364   59674 cri.go:89] found id: ""
	I0722 11:54:10.893390   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.893400   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:10.893409   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:10.893471   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:10.929444   59674 cri.go:89] found id: ""
	I0722 11:54:10.929472   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.929481   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:10.929489   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:10.929501   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:10.968567   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:10.968598   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:11.024447   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:11.024484   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:11.039405   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:11.039429   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:11.116322   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:11.116341   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:11.116356   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:13.697581   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:13.711738   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:13.711831   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:13.747711   59674 cri.go:89] found id: ""
	I0722 11:54:13.747742   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.747750   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:13.747757   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:13.747812   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:13.790965   59674 cri.go:89] found id: ""
	I0722 11:54:13.790987   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.790997   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:13.791005   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:13.791053   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:13.829043   59674 cri.go:89] found id: ""
	I0722 11:54:13.829071   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.829080   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:13.829086   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:13.829159   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:13.865542   59674 cri.go:89] found id: ""
	I0722 11:54:13.865560   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.865567   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:13.865572   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:13.865615   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:13.897709   59674 cri.go:89] found id: ""
	I0722 11:54:13.897749   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.897762   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:13.897769   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:13.897833   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:13.931319   59674 cri.go:89] found id: ""
	I0722 11:54:13.931339   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.931348   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:13.931355   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:13.931409   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:13.987927   59674 cri.go:89] found id: ""
	I0722 11:54:13.987954   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.987964   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:13.987970   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:13.988030   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:14.028680   59674 cri.go:89] found id: ""
	I0722 11:54:14.028706   59674 logs.go:276] 0 containers: []
	W0722 11:54:14.028716   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:14.028726   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:14.028743   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:14.089863   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:14.089904   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:14.103664   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:14.103691   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:14.174453   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:14.174479   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:14.174496   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:14.260748   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:14.260780   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:16.800474   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:16.814408   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:16.814472   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:16.849936   59674 cri.go:89] found id: ""
	I0722 11:54:16.849963   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.849972   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:16.849979   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:16.850037   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:16.884323   59674 cri.go:89] found id: ""
	I0722 11:54:16.884349   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.884360   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:16.884367   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:16.884445   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:16.921549   59674 cri.go:89] found id: ""
	I0722 11:54:16.921635   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.921652   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:16.921660   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:16.921726   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:16.959670   59674 cri.go:89] found id: ""
	I0722 11:54:16.959701   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.959711   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:16.959719   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:16.959779   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:16.995577   59674 cri.go:89] found id: ""
	I0722 11:54:16.995605   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.995615   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:16.995624   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:16.995683   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:17.032026   59674 cri.go:89] found id: ""
	I0722 11:54:17.032056   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.032067   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:17.032075   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:17.032156   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:17.068309   59674 cri.go:89] found id: ""
	I0722 11:54:17.068337   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.068348   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:17.068355   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:17.068433   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:17.106731   59674 cri.go:89] found id: ""
	I0722 11:54:17.106760   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.106776   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:17.106787   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:17.106801   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:17.159944   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:17.159971   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:17.174479   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:17.174513   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:17.249311   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:17.249332   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:17.249345   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:17.335527   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:17.335561   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:19.874791   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:19.892887   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:19.892961   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:19.945700   59674 cri.go:89] found id: ""
	I0722 11:54:19.945729   59674 logs.go:276] 0 containers: []
	W0722 11:54:19.945737   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:19.945742   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:19.945799   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:19.996027   59674 cri.go:89] found id: ""
	I0722 11:54:19.996062   59674 logs.go:276] 0 containers: []
	W0722 11:54:19.996072   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:19.996078   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:19.996133   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:20.040793   59674 cri.go:89] found id: ""
	I0722 11:54:20.040820   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.040830   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:20.040837   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:20.040906   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:20.073737   59674 cri.go:89] found id: ""
	I0722 11:54:20.073760   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.073768   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:20.073774   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:20.073817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:20.108255   59674 cri.go:89] found id: ""
	I0722 11:54:20.108280   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.108287   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:20.108294   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:20.108342   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:20.143140   59674 cri.go:89] found id: ""
	I0722 11:54:20.143165   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.143174   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:20.143180   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:20.143225   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:20.177009   59674 cri.go:89] found id: ""
	I0722 11:54:20.177030   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.177037   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:20.177043   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:20.177089   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:20.215743   59674 cri.go:89] found id: ""
	I0722 11:54:20.215765   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.215773   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:20.215781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:20.215791   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:20.267872   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:20.267905   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:20.281601   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:20.281626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:20.352347   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:20.352364   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:20.352376   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:20.431695   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:20.431727   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:22.974218   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:22.988161   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:22.988235   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:23.024542   59674 cri.go:89] found id: ""
	I0722 11:54:23.024571   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.024581   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:23.024588   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:23.024656   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:23.067343   59674 cri.go:89] found id: ""
	I0722 11:54:23.067367   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.067376   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:23.067383   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:23.067443   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:23.103711   59674 cri.go:89] found id: ""
	I0722 11:54:23.103741   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.103751   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:23.103758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:23.103817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:23.137896   59674 cri.go:89] found id: ""
	I0722 11:54:23.137926   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.137937   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:23.137944   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:23.138002   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:23.174689   59674 cri.go:89] found id: ""
	I0722 11:54:23.174722   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.174733   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:23.174742   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:23.174795   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:23.208669   59674 cri.go:89] found id: ""
	I0722 11:54:23.208690   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.208700   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:23.208708   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:23.208766   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:23.243286   59674 cri.go:89] found id: ""
	I0722 11:54:23.243314   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.243326   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:23.243335   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:23.243401   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:23.279277   59674 cri.go:89] found id: ""
	I0722 11:54:23.279303   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.279312   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:23.279324   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:23.279337   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:23.332016   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:23.332045   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:23.346383   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:23.346417   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:23.421449   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:23.421471   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:23.421486   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:23.507395   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:23.507432   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:26.053610   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:26.068359   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:26.068448   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:26.102425   59674 cri.go:89] found id: ""
	I0722 11:54:26.102454   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.102465   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:26.102472   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:26.102531   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:26.135572   59674 cri.go:89] found id: ""
	I0722 11:54:26.135598   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.135608   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:26.135616   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:26.135682   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:26.175015   59674 cri.go:89] found id: ""
	I0722 11:54:26.175044   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.175054   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:26.175062   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:26.175123   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:26.209186   59674 cri.go:89] found id: ""
	I0722 11:54:26.209209   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.209216   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:26.209221   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:26.209275   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:26.248477   59674 cri.go:89] found id: ""
	I0722 11:54:26.248500   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.248507   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:26.248512   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:26.248590   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:26.281481   59674 cri.go:89] found id: ""
	I0722 11:54:26.281506   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.281515   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:26.281520   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:26.281580   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:26.314467   59674 cri.go:89] found id: ""
	I0722 11:54:26.314496   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.314503   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:26.314509   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:26.314556   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:26.349396   59674 cri.go:89] found id: ""
	I0722 11:54:26.349422   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.349431   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:26.349441   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:26.349454   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:26.403227   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:26.403253   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:26.415860   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:26.415882   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:26.484768   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:26.484793   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:26.484809   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:26.563360   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:26.563396   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:29.103764   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:29.117120   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:29.117193   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:29.153198   59674 cri.go:89] found id: ""
	I0722 11:54:29.153241   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.153252   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:29.153260   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:29.153324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:29.190406   59674 cri.go:89] found id: ""
	I0722 11:54:29.190426   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.190433   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:29.190438   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:29.190486   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:29.232049   59674 cri.go:89] found id: ""
	I0722 11:54:29.232073   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.232080   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:29.232085   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:29.232147   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:29.270174   59674 cri.go:89] found id: ""
	I0722 11:54:29.270200   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.270208   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:29.270218   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:29.270268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:29.307709   59674 cri.go:89] found id: ""
	I0722 11:54:29.307733   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.307740   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:29.307746   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:29.307802   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:29.343807   59674 cri.go:89] found id: ""
	I0722 11:54:29.343832   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.343842   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:29.343850   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:29.343907   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:29.380240   59674 cri.go:89] found id: ""
	I0722 11:54:29.380263   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.380270   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:29.380276   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:29.380332   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:29.412785   59674 cri.go:89] found id: ""
	I0722 11:54:29.412811   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.412820   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:29.412830   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:29.412844   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:29.470948   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:29.470985   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:29.485120   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:29.485146   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:29.558760   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:29.558778   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:29.558792   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:29.638093   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:29.638123   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:32.183511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:32.196719   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:32.196796   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:32.229436   59674 cri.go:89] found id: ""
	I0722 11:54:32.229466   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.229474   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:32.229480   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:32.229533   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:32.271971   59674 cri.go:89] found id: ""
	I0722 11:54:32.271998   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.272008   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:32.272017   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:32.272086   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:32.302967   59674 cri.go:89] found id: ""
	I0722 11:54:32.302991   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.302999   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:32.303005   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:32.303053   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:32.334443   59674 cri.go:89] found id: ""
	I0722 11:54:32.334468   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.334478   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:32.334485   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:32.334544   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:32.371586   59674 cri.go:89] found id: ""
	I0722 11:54:32.371612   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.371622   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:32.371630   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:32.371693   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:32.419920   59674 cri.go:89] found id: ""
	I0722 11:54:32.419954   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.419966   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:32.419974   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:32.420034   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:32.459377   59674 cri.go:89] found id: ""
	I0722 11:54:32.459398   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.459405   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:32.459411   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:32.459472   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:32.500740   59674 cri.go:89] found id: ""
	I0722 11:54:32.500764   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.500771   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:32.500781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:32.500796   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:32.551285   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:32.551316   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:32.564448   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:32.564474   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:32.637652   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:32.637679   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:32.637694   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:32.721599   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:32.721638   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:35.265202   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:35.278766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:35.278844   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:35.312545   59674 cri.go:89] found id: ""
	I0722 11:54:35.312574   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.312582   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:35.312587   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:35.312637   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:35.346988   59674 cri.go:89] found id: ""
	I0722 11:54:35.347014   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.347024   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:35.347032   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:35.347090   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:35.382876   59674 cri.go:89] found id: ""
	I0722 11:54:35.382908   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.382920   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:35.382929   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:35.382997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:35.418093   59674 cri.go:89] found id: ""
	I0722 11:54:35.418115   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.418122   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:35.418129   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:35.418186   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:35.455262   59674 cri.go:89] found id: ""
	I0722 11:54:35.455291   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.455301   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:35.455306   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:35.455362   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:35.494893   59674 cri.go:89] found id: ""
	I0722 11:54:35.494924   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.494934   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:35.494945   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:35.495007   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:35.529768   59674 cri.go:89] found id: ""
	I0722 11:54:35.529791   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.529798   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:35.529804   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:35.529850   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:35.564972   59674 cri.go:89] found id: ""
	I0722 11:54:35.565001   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.565012   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:35.565024   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:35.565039   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:35.615985   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:35.616025   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:35.630133   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:35.630156   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:35.699669   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:35.699697   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:35.699711   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:35.779737   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:35.779771   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:38.320368   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:38.334371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:38.334443   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:38.371050   59674 cri.go:89] found id: ""
	I0722 11:54:38.371081   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.371088   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:38.371109   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:38.371170   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:38.410676   59674 cri.go:89] found id: ""
	I0722 11:54:38.410698   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.410706   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:38.410712   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:38.410770   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:38.447331   59674 cri.go:89] found id: ""
	I0722 11:54:38.447357   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.447366   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:38.447371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:38.447426   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:38.483548   59674 cri.go:89] found id: ""
	I0722 11:54:38.483589   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.483600   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:38.483608   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:38.483669   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:38.521694   59674 cri.go:89] found id: ""
	I0722 11:54:38.521723   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.521737   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:38.521742   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:38.521799   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:38.560507   59674 cri.go:89] found id: ""
	I0722 11:54:38.560532   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.560543   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:38.560550   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:38.560609   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:38.595734   59674 cri.go:89] found id: ""
	I0722 11:54:38.595761   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.595771   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:38.595778   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:38.595839   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:38.634176   59674 cri.go:89] found id: ""
	I0722 11:54:38.634198   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.634205   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:38.634213   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:38.634224   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:38.688196   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:38.688235   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:38.701554   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:38.701583   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:38.772547   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:38.772575   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:38.772590   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:38.858025   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:38.858056   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:41.400777   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:41.415370   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:41.415427   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:41.448023   59674 cri.go:89] found id: ""
	I0722 11:54:41.448045   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.448052   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:41.448058   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:41.448104   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:41.480745   59674 cri.go:89] found id: ""
	I0722 11:54:41.480766   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.480774   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:41.480779   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:41.480830   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:41.514627   59674 cri.go:89] found id: ""
	I0722 11:54:41.514651   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.514666   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:41.514673   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:41.514731   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:41.548226   59674 cri.go:89] found id: ""
	I0722 11:54:41.548255   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.548267   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:41.548274   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:41.548325   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:41.581361   59674 cri.go:89] found id: ""
	I0722 11:54:41.581383   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.581390   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:41.581396   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:41.581452   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:41.616249   59674 cri.go:89] found id: ""
	I0722 11:54:41.616277   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.616287   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:41.616295   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:41.616361   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:41.651569   59674 cri.go:89] found id: ""
	I0722 11:54:41.651593   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.651601   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:41.651607   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:41.651657   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:41.685173   59674 cri.go:89] found id: ""
	I0722 11:54:41.685194   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.685202   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:41.685209   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:41.685222   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:41.762374   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:41.762393   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:41.762405   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:41.843370   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:41.843403   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:41.883097   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:41.883127   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:41.933824   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:41.933854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:44.447568   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:44.461528   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:44.461608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:44.497926   59674 cri.go:89] found id: ""
	I0722 11:54:44.497951   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.497958   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:44.497963   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:44.498023   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:44.534483   59674 cri.go:89] found id: ""
	I0722 11:54:44.534507   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.534515   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:44.534520   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:44.534565   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:44.573106   59674 cri.go:89] found id: ""
	I0722 11:54:44.573140   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.573148   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:44.573154   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:44.573204   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:44.610565   59674 cri.go:89] found id: ""
	I0722 11:54:44.610612   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.610626   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:44.610634   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:44.610697   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:44.646946   59674 cri.go:89] found id: ""
	I0722 11:54:44.646980   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.646994   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:44.647001   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:44.647060   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:44.685876   59674 cri.go:89] found id: ""
	I0722 11:54:44.685904   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.685913   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:44.685919   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:44.685969   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:44.720398   59674 cri.go:89] found id: ""
	I0722 11:54:44.720425   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.720434   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:44.720441   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:44.720506   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:44.757472   59674 cri.go:89] found id: ""
	I0722 11:54:44.757501   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.757511   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:44.757522   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:44.757535   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:44.807442   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:44.807468   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:44.820432   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:44.820457   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:44.892182   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:44.892199   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:44.892209   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:44.976545   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:44.976580   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:47.519413   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:47.532974   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:47.533035   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:47.570869   59674 cri.go:89] found id: ""
	I0722 11:54:47.570904   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.570915   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:47.570923   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:47.571055   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:47.606020   59674 cri.go:89] found id: ""
	I0722 11:54:47.606045   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.606052   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:47.606057   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:47.606106   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:47.642717   59674 cri.go:89] found id: ""
	I0722 11:54:47.642741   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.642752   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:47.642758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:47.642817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:47.677761   59674 cri.go:89] found id: ""
	I0722 11:54:47.677786   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.677796   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:47.677803   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:47.677863   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:47.710989   59674 cri.go:89] found id: ""
	I0722 11:54:47.711016   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.711025   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:47.711032   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:47.711097   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:47.744814   59674 cri.go:89] found id: ""
	I0722 11:54:47.744839   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.744847   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:47.744853   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:47.744904   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:47.778926   59674 cri.go:89] found id: ""
	I0722 11:54:47.778953   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.778960   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:47.778965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:47.779015   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:47.818419   59674 cri.go:89] found id: ""
	I0722 11:54:47.818458   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.818465   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:47.818473   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:47.818485   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:47.870867   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:47.870892   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:47.884504   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:47.884523   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:47.952449   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:47.952470   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:47.952485   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:48.035731   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:48.035763   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:50.589071   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:50.602786   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:50.602880   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:50.638324   59674 cri.go:89] found id: ""
	I0722 11:54:50.638355   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.638366   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:50.638375   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:50.638438   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:50.674906   59674 cri.go:89] found id: ""
	I0722 11:54:50.674932   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.674947   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:50.674955   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:50.675017   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:50.709284   59674 cri.go:89] found id: ""
	I0722 11:54:50.709313   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.709322   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:50.709328   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:50.709387   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:50.748595   59674 cri.go:89] found id: ""
	I0722 11:54:50.748623   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.748632   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:50.748638   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:50.748695   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:50.782681   59674 cri.go:89] found id: ""
	I0722 11:54:50.782707   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.782716   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:50.782721   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:50.782797   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:50.820037   59674 cri.go:89] found id: ""
	I0722 11:54:50.820067   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.820077   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:50.820084   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:50.820150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:50.857807   59674 cri.go:89] found id: ""
	I0722 11:54:50.857835   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.857845   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:50.857852   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:50.857925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:50.894924   59674 cri.go:89] found id: ""
	I0722 11:54:50.894946   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.894954   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:50.894962   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:50.894981   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:50.947373   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:50.947407   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:50.962243   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:50.962272   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:51.041450   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:51.041474   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:51.041488   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:51.133982   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:51.134018   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:53.678461   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:53.691710   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:53.691778   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:53.726266   59674 cri.go:89] found id: ""
	I0722 11:54:53.726294   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.726305   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:53.726313   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:53.726366   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:53.759262   59674 cri.go:89] found id: ""
	I0722 11:54:53.759291   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.759303   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:53.759311   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:53.759381   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:53.795859   59674 cri.go:89] found id: ""
	I0722 11:54:53.795894   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.795906   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:53.795913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:53.795975   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:53.842343   59674 cri.go:89] found id: ""
	I0722 11:54:53.842366   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.842379   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:53.842387   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:53.842444   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:53.882648   59674 cri.go:89] found id: ""
	I0722 11:54:53.882674   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.882684   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:53.882691   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:53.882751   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:53.914352   59674 cri.go:89] found id: ""
	I0722 11:54:53.914373   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.914380   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:53.914386   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:53.914442   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:53.952257   59674 cri.go:89] found id: ""
	I0722 11:54:53.952286   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.952296   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:53.952301   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:53.952348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:53.991612   59674 cri.go:89] found id: ""
	I0722 11:54:53.991642   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.991651   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:53.991661   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:53.991682   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:54.065253   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:54.065271   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:54.065285   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:54.153570   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:54.153603   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:54.195100   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:54.195138   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:54.246784   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:54.246812   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:56.762702   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:56.776501   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:56.776567   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:56.809838   59674 cri.go:89] found id: ""
	I0722 11:54:56.809866   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.809874   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:56.809882   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:56.809934   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:56.845567   59674 cri.go:89] found id: ""
	I0722 11:54:56.845594   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.845602   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:56.845610   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:56.845672   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:56.879899   59674 cri.go:89] found id: ""
	I0722 11:54:56.879929   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.879939   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:56.879946   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:56.880000   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:56.911631   59674 cri.go:89] found id: ""
	I0722 11:54:56.911658   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.911667   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:56.911675   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:56.911734   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:56.946101   59674 cri.go:89] found id: ""
	I0722 11:54:56.946124   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.946132   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:56.946142   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:56.946211   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:56.980265   59674 cri.go:89] found id: ""
	I0722 11:54:56.980289   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.980301   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:56.980308   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:56.980367   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:57.014902   59674 cri.go:89] found id: ""
	I0722 11:54:57.014935   59674 logs.go:276] 0 containers: []
	W0722 11:54:57.014951   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:57.014958   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:57.015021   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:57.051573   59674 cri.go:89] found id: ""
	I0722 11:54:57.051597   59674 logs.go:276] 0 containers: []
	W0722 11:54:57.051605   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:57.051613   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:57.051626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:57.065650   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:57.065683   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:57.133230   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:57.133257   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:57.133275   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:57.217002   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:57.217038   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:57.260236   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:57.260264   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:59.812785   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:59.826782   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:59.826836   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:59.863375   59674 cri.go:89] found id: ""
	I0722 11:54:59.863404   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.863414   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:59.863423   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:59.863484   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:59.902161   59674 cri.go:89] found id: ""
	I0722 11:54:59.902193   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.902204   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:59.902211   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:59.902263   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:59.945153   59674 cri.go:89] found id: ""
	I0722 11:54:59.945182   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.945193   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:59.945201   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:59.945265   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:59.989535   59674 cri.go:89] found id: ""
	I0722 11:54:59.989562   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.989570   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:59.989575   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:59.989643   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:00.028977   59674 cri.go:89] found id: ""
	I0722 11:55:00.029001   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.029009   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:00.029017   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:00.029068   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:00.065396   59674 cri.go:89] found id: ""
	I0722 11:55:00.065425   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.065437   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:00.065447   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:00.065502   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:00.104354   59674 cri.go:89] found id: ""
	I0722 11:55:00.104397   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.104409   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:00.104417   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:00.104480   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:00.141798   59674 cri.go:89] found id: ""
	I0722 11:55:00.141822   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.141829   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:00.141838   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:00.141853   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:00.195791   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:00.195823   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:00.214812   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:00.214845   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:00.307286   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:00.307311   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:00.307323   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:00.409770   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:00.409805   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:02.951630   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:02.964673   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:02.964728   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:03.005256   59674 cri.go:89] found id: ""
	I0722 11:55:03.005285   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.005296   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:03.005303   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:03.005359   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:03.037558   59674 cri.go:89] found id: ""
	I0722 11:55:03.037587   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.037595   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:03.037600   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:03.037646   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:03.071168   59674 cri.go:89] found id: ""
	I0722 11:55:03.071196   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.071206   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:03.071214   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:03.071271   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:03.104212   59674 cri.go:89] found id: ""
	I0722 11:55:03.104238   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.104248   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:03.104255   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:03.104313   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:03.141378   59674 cri.go:89] found id: ""
	I0722 11:55:03.141401   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.141409   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:03.141414   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:03.141458   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:03.178881   59674 cri.go:89] found id: ""
	I0722 11:55:03.178906   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.178915   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:03.178923   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:03.178987   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:03.215768   59674 cri.go:89] found id: ""
	I0722 11:55:03.215796   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.215804   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:03.215810   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:03.215854   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:03.256003   59674 cri.go:89] found id: ""
	I0722 11:55:03.256029   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.256041   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:03.256051   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:03.256069   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:03.308182   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:03.308216   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:03.323870   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:03.323903   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:03.406646   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:03.406670   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:03.406682   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:03.490947   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:03.490984   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:06.030341   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:06.046814   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:06.046874   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:06.088735   59674 cri.go:89] found id: ""
	I0722 11:55:06.088756   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.088764   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:06.088770   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:06.088823   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:06.153138   59674 cri.go:89] found id: ""
	I0722 11:55:06.153165   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.153174   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:06.153181   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:06.153241   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:06.203479   59674 cri.go:89] found id: ""
	I0722 11:55:06.203506   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.203516   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:06.203523   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:06.203585   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:06.239632   59674 cri.go:89] found id: ""
	I0722 11:55:06.239661   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.239671   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:06.239678   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:06.239739   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:06.278663   59674 cri.go:89] found id: ""
	I0722 11:55:06.278693   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.278703   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:06.278711   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:06.278772   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:06.318291   59674 cri.go:89] found id: ""
	I0722 11:55:06.318315   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.318323   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:06.318329   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:06.318382   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:06.355362   59674 cri.go:89] found id: ""
	I0722 11:55:06.355383   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.355390   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:06.355395   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:06.355446   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:06.395032   59674 cri.go:89] found id: ""
	I0722 11:55:06.395062   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.395073   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:06.395084   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:06.395098   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:06.451585   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:06.451623   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:06.466009   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:06.466037   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:06.534051   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:06.534071   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:06.534082   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:06.617165   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:06.617202   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:09.155242   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:09.169327   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:09.169389   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:09.209138   59674 cri.go:89] found id: ""
	I0722 11:55:09.209165   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.209174   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:09.209181   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:09.209243   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:09.249129   59674 cri.go:89] found id: ""
	I0722 11:55:09.249156   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.249167   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:09.249175   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:09.249237   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:09.284350   59674 cri.go:89] found id: ""
	I0722 11:55:09.284374   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.284400   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:09.284416   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:09.284487   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:09.317288   59674 cri.go:89] found id: ""
	I0722 11:55:09.317315   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.317322   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:09.317327   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:09.317374   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:09.353227   59674 cri.go:89] found id: ""
	I0722 11:55:09.353249   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.353259   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:09.353266   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:09.353324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:09.388376   59674 cri.go:89] found id: ""
	I0722 11:55:09.388434   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.388442   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:09.388448   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:09.388498   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:09.422197   59674 cri.go:89] found id: ""
	I0722 11:55:09.422221   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.422228   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:09.422235   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:09.422282   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:09.455321   59674 cri.go:89] found id: ""
	I0722 11:55:09.455350   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.455360   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:09.455370   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:09.455384   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:09.536331   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:09.536366   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:09.578847   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:09.578880   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:09.630597   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:09.630626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:09.644531   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:09.644557   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:09.710502   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
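	For context, each cycle above repeats the same node-side checks. A minimal shell sketch of one such cycle follows, using only the commands already visible in this log (assumptions: run on the minikube node, with pgrep, crictl, journalctl, and the bundled kubectl at the paths shown). The final describe-nodes step is the one that keeps failing, since nothing is serving on localhost:8443.

	# Is a kube-apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	# Are any control-plane containers known to the CRI runtime?
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"
	done

	# Gather supporting logs from the node.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	# The step that fails: with no apiserver listening on localhost:8443,
	# "describe nodes" gets "connection refused".
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig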
	I0722 11:55:12.210716   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:12.223909   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:12.223969   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:12.259241   59674 cri.go:89] found id: ""
	I0722 11:55:12.259266   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.259275   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:12.259282   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:12.259344   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:12.293967   59674 cri.go:89] found id: ""
	I0722 11:55:12.294000   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.294007   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:12.294013   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:12.294061   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:12.328073   59674 cri.go:89] found id: ""
	I0722 11:55:12.328106   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.328114   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:12.328121   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:12.328180   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:12.363176   59674 cri.go:89] found id: ""
	I0722 11:55:12.363200   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.363207   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:12.363213   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:12.363287   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:12.398145   59674 cri.go:89] found id: ""
	I0722 11:55:12.398171   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.398180   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:12.398185   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:12.398231   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:12.431824   59674 cri.go:89] found id: ""
	I0722 11:55:12.431853   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.431861   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:12.431867   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:12.431925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:12.465097   59674 cri.go:89] found id: ""
	I0722 11:55:12.465128   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.465135   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:12.465140   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:12.465186   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:12.502934   59674 cri.go:89] found id: ""
	I0722 11:55:12.502965   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.502974   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:12.502984   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:12.502999   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:12.541950   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:12.541979   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:12.592632   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:12.592660   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:12.606073   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:12.606098   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:12.675388   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:12.675417   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:12.675432   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:15.253008   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:15.266957   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:15.267028   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:15.303035   59674 cri.go:89] found id: ""
	I0722 11:55:15.303069   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.303080   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:15.303088   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:15.303150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:15.338089   59674 cri.go:89] found id: ""
	I0722 11:55:15.338113   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.338121   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:15.338126   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:15.338184   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:15.376973   59674 cri.go:89] found id: ""
	I0722 11:55:15.376998   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.377005   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:15.377015   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:15.377075   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:15.416466   59674 cri.go:89] found id: ""
	I0722 11:55:15.416491   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.416500   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:15.416507   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:15.416565   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:15.456472   59674 cri.go:89] found id: ""
	I0722 11:55:15.456501   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.456511   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:15.456519   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:15.456580   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:15.491963   59674 cri.go:89] found id: ""
	I0722 11:55:15.491991   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.491999   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:15.492005   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:15.492062   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:15.530819   59674 cri.go:89] found id: ""
	I0722 11:55:15.530847   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.530857   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:15.530865   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:15.530934   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:15.569388   59674 cri.go:89] found id: ""
	I0722 11:55:15.569415   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.569422   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:15.569430   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:15.569439   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:15.623949   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:15.623981   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:15.637828   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:15.637848   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:15.707733   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:15.707754   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:15.707765   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:15.787435   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:15.787473   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:18.329310   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:18.342412   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:18.342476   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:18.379542   59674 cri.go:89] found id: ""
	I0722 11:55:18.379563   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.379570   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:18.379575   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:18.379657   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:18.414442   59674 cri.go:89] found id: ""
	I0722 11:55:18.414468   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.414477   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:18.414483   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:18.414526   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:18.454571   59674 cri.go:89] found id: ""
	I0722 11:55:18.454598   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.454608   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:18.454614   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:18.454658   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:18.491012   59674 cri.go:89] found id: ""
	I0722 11:55:18.491039   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.491047   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:18.491052   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:18.491114   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:18.525923   59674 cri.go:89] found id: ""
	I0722 11:55:18.525952   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.525962   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:18.525970   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:18.526031   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:18.560288   59674 cri.go:89] found id: ""
	I0722 11:55:18.560315   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.560325   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:18.560332   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:18.560412   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:18.596674   59674 cri.go:89] found id: ""
	I0722 11:55:18.596698   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.596706   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:18.596712   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:18.596766   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:18.635012   59674 cri.go:89] found id: ""
	I0722 11:55:18.635034   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.635041   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:18.635049   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:18.635060   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:18.685999   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:18.686024   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:18.700085   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:18.700108   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:18.765465   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:18.765484   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:18.765495   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:18.846554   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:18.846592   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:21.389684   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:21.401964   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:21.402042   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:21.438128   59674 cri.go:89] found id: ""
	I0722 11:55:21.438156   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.438165   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:21.438171   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:21.438258   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:21.475793   59674 cri.go:89] found id: ""
	I0722 11:55:21.475819   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.475828   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:21.475833   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:21.475878   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:21.510238   59674 cri.go:89] found id: ""
	I0722 11:55:21.510265   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.510273   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:21.510278   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:21.510333   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:21.548293   59674 cri.go:89] found id: ""
	I0722 11:55:21.548320   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.548331   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:21.548337   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:21.548403   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:21.584542   59674 cri.go:89] found id: ""
	I0722 11:55:21.584573   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.584584   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:21.584591   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:21.584655   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:21.621709   59674 cri.go:89] found id: ""
	I0722 11:55:21.621745   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.621758   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:21.621767   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:21.621854   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:21.656111   59674 cri.go:89] found id: ""
	I0722 11:55:21.656134   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.656143   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:21.656148   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:21.656197   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:21.692324   59674 cri.go:89] found id: ""
	I0722 11:55:21.692353   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.692363   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:21.692374   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:21.692405   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:21.769527   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:21.769550   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:21.769566   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:21.850069   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:21.850107   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:21.890781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:21.890816   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:21.952170   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:21.952211   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:24.467001   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:24.481526   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:24.481594   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:24.518694   59674 cri.go:89] found id: ""
	I0722 11:55:24.518724   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.518734   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:24.518740   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:24.518798   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:24.554606   59674 cri.go:89] found id: ""
	I0722 11:55:24.554629   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.554637   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:24.554642   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:24.554703   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:24.592042   59674 cri.go:89] found id: ""
	I0722 11:55:24.592072   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.592083   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:24.592090   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:24.592158   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:24.624456   59674 cri.go:89] found id: ""
	I0722 11:55:24.624479   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.624487   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:24.624493   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:24.624540   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:24.659502   59674 cri.go:89] found id: ""
	I0722 11:55:24.659526   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.659533   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:24.659541   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:24.659586   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:24.695548   59674 cri.go:89] found id: ""
	I0722 11:55:24.695572   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.695580   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:24.695585   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:24.695632   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:24.730320   59674 cri.go:89] found id: ""
	I0722 11:55:24.730362   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.730383   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:24.730391   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:24.730451   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:24.763002   59674 cri.go:89] found id: ""
	I0722 11:55:24.763031   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.763042   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:24.763053   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:24.763068   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:24.801537   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:24.801568   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:24.855157   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:24.855189   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:24.872946   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:24.872983   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:24.943654   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:24.943683   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:24.943697   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:27.532539   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:27.551073   59674 kubeadm.go:597] duration metric: took 4m3.599954496s to restartPrimaryControlPlane
	W0722 11:55:27.551154   59674 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:55:27.551183   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:55:28.607726   59674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.056515088s)
	I0722 11:55:28.607808   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:55:28.622638   59674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:55:28.633327   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:55:28.643630   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:55:28.643657   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:55:28.643708   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:55:28.655424   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:55:28.655488   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:55:28.666415   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:55:28.678321   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:55:28.678387   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:55:28.687990   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:55:28.700637   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:55:28.700688   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:55:28.711737   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:55:28.723611   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:55:28.723672   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
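	[editor's note] The stale-config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it; here every grep exits with status 2 because the files are already absent, so the rm calls are no-ops. A compact sketch of that check under the same assumptions (same endpoint and file paths as in the log):
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # keep the file only if it points at the expected control-plane endpoint
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done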
	I0722 11:55:28.734841   59674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:55:28.966498   59674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:57:24.750495   59674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:57:24.750641   59674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 11:57:24.752309   59674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:57:24.752368   59674 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:57:24.752499   59674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:57:24.752662   59674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:57:24.752788   59674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:57:24.752851   59674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:57:24.754464   59674 out.go:204]   - Generating certificates and keys ...
	I0722 11:57:24.754528   59674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:57:24.754595   59674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:57:24.754712   59674 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:57:24.754926   59674 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:57:24.755033   59674 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:57:24.755114   59674 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:57:24.755188   59674 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:57:24.755276   59674 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:57:24.755374   59674 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:57:24.755472   59674 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:57:24.755513   59674 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:57:24.755561   59674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:57:24.755606   59674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:57:24.755647   59674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:57:24.755700   59674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:57:24.755742   59674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:57:24.755836   59674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:57:24.755950   59674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:57:24.755986   59674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:57:24.756089   59674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:57:24.757395   59674 out.go:204]   - Booting up control plane ...
	I0722 11:57:24.757482   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:57:24.757566   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:57:24.757657   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:57:24.757905   59674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:57:24.758131   59674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:57:24.758205   59674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:57:24.758311   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.758565   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.758650   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.758852   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.758957   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759153   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759217   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759412   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759495   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759688   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759696   59674 kubeadm.go:310] 
	I0722 11:57:24.759729   59674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:57:24.759791   59674 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:57:24.759812   59674 kubeadm.go:310] 
	I0722 11:57:24.759868   59674 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:57:24.759903   59674 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:57:24.760077   59674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:57:24.760094   59674 kubeadm.go:310] 
	I0722 11:57:24.760245   59674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:57:24.760300   59674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:57:24.760350   59674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:57:24.760363   59674 kubeadm.go:310] 
	I0722 11:57:24.760534   59674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:57:24.760640   59674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:57:24.760654   59674 kubeadm.go:310] 
	I0722 11:57:24.760819   59674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:57:24.760902   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:57:24.761013   59674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:57:24.761124   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:57:24.761213   59674 kubeadm.go:310] 
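	[editor's note] kubeadm gives up after the kubelet's health endpoint on port 10248 never answers; the troubleshooting steps it prints can be run directly on the node. A hedged sketch combining them, using only the commands kubeadm suggests above:
	    # is the kubelet process up, and what does it log?
	    systemctl status kubelet
	    journalctl -xeu kubelet
	    # the health endpoint kubeadm polls during wait-control-plane
	    curl -sSL http://localhost:10248/healthz
	    # control-plane containers, if any were created by CRI-O
	    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause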
	W0722 11:57:24.761263   59674 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0722 11:57:24.761321   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:57:25.222130   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:25.236593   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:57:25.247009   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:57:25.247026   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:57:25.247078   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:57:25.256617   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:57:25.256674   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:57:25.265950   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:57:25.275080   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:57:25.275133   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:57:25.285058   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:57:25.294015   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:57:25.294070   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:57:25.304009   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:57:25.313492   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:57:25.313565   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:57:25.322903   59674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:57:25.545662   59674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:59:21.714624   59674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:59:21.714729   59674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 11:59:21.716617   59674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:59:21.716683   59674 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:59:21.716771   59674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:59:21.716939   59674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:59:21.717077   59674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:59:21.717136   59674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:59:21.718742   59674 out.go:204]   - Generating certificates and keys ...
	I0722 11:59:21.718837   59674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:59:21.718927   59674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:59:21.718995   59674 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:59:21.719065   59674 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:59:21.719140   59674 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:59:21.719187   59674 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:59:21.719251   59674 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:59:21.719329   59674 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:59:21.719408   59674 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:59:21.719497   59674 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:59:21.719538   59674 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:59:21.719592   59674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:59:21.719635   59674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:59:21.719680   59674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:59:21.719745   59674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:59:21.719823   59674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:59:21.719970   59674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:59:21.720056   59674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:59:21.720090   59674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:59:21.720147   59674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:59:21.721505   59674 out.go:204]   - Booting up control plane ...
	I0722 11:59:21.721586   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:59:21.721656   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:59:21.721712   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:59:21.721778   59674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:59:21.721923   59674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:59:21.721988   59674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:59:21.722045   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722201   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722272   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722431   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722488   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722658   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722730   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722885   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722943   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.723110   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.723118   59674 kubeadm.go:310] 
	I0722 11:59:21.723154   59674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:59:21.723192   59674 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:59:21.723198   59674 kubeadm.go:310] 
	I0722 11:59:21.723226   59674 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:59:21.723255   59674 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:59:21.723339   59674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:59:21.723346   59674 kubeadm.go:310] 
	I0722 11:59:21.723442   59674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:59:21.723495   59674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:59:21.723537   59674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:59:21.723546   59674 kubeadm.go:310] 
	I0722 11:59:21.723709   59674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:59:21.723823   59674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:59:21.723833   59674 kubeadm.go:310] 
	I0722 11:59:21.723941   59674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:59:21.724023   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:59:21.724086   59674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:59:21.724156   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:59:21.724197   59674 kubeadm.go:310] 
	I0722 11:59:21.724212   59674 kubeadm.go:394] duration metric: took 7m57.831193066s to StartCluster
	I0722 11:59:21.724246   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:59:21.724296   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:59:21.771578   59674 cri.go:89] found id: ""
	I0722 11:59:21.771611   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.771622   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:59:21.771631   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:59:21.771694   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:59:21.809027   59674 cri.go:89] found id: ""
	I0722 11:59:21.809055   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.809065   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:59:21.809071   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:59:21.809143   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:59:21.844667   59674 cri.go:89] found id: ""
	I0722 11:59:21.844690   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.844698   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:59:21.844703   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:59:21.844754   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:59:21.888054   59674 cri.go:89] found id: ""
	I0722 11:59:21.888078   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.888086   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:59:21.888091   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:59:21.888150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:59:21.931688   59674 cri.go:89] found id: ""
	I0722 11:59:21.931711   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.931717   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:59:21.931722   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:59:21.931775   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:59:21.974044   59674 cri.go:89] found id: ""
	I0722 11:59:21.974074   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.974095   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:59:21.974102   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:59:21.974170   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:59:22.010302   59674 cri.go:89] found id: ""
	I0722 11:59:22.010326   59674 logs.go:276] 0 containers: []
	W0722 11:59:22.010334   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:59:22.010338   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:59:22.010385   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:59:22.047170   59674 cri.go:89] found id: ""
	I0722 11:59:22.047201   59674 logs.go:276] 0 containers: []
	W0722 11:59:22.047212   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:59:22.047224   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:59:22.047237   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:59:22.086648   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:59:22.086678   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:59:22.141255   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:59:22.141288   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:59:22.157063   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:59:22.157095   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:59:22.244259   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:59:22.244284   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:59:22.244300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0722 11:59:22.357489   59674 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 11:59:22.357536   59674 out.go:239] * 
	* 
	W0722 11:59:22.357600   59674 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:59:22.357622   59674 out.go:239] * 
	* 
	W0722 11:59:22.358374   59674 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 11:59:22.361655   59674 out.go:177] 
	W0722 11:59:22.362800   59674 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:59:22.362845   59674 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 11:59:22.362860   59674 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 11:59:22.364239   59674 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-101261 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
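The kubeadm failure above reduces to the kubelet never answering its health probe on port 10248. A minimal way to check this by hand (a sketch only; it assumes the old-k8s-version-101261 profile from this run still exists and is reachable over SSH) is to run the probe and the troubleshooting commands suggested in the kubeadm output directly on the node:

	out/minikube-linux-amd64 -p old-k8s-version-101261 ssh "curl -sSL http://localhost:10248/healthz"
	out/minikube-linux-amd64 -p old-k8s-version-101261 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-101261 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	out/minikube-linux-amd64 -p old-k8s-version-101261 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

If the kubelet is failing on the cgroup driver, the suggestion logged above (passing --extra-config=kubelet.cgroup-driver=systemd to minikube start) is the first thing to try.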
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-101261 -n old-k8s-version-101261
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-101261 -n old-k8s-version-101261: exit status 2 (235.611579ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-101261 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-101261 logs -n 25: (1.49276243s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC | 22 Jul 24 11:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC | 22 Jul 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-467176                              | cert-expiration-467176       | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-339929             | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-339929                                   | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-467176                              | cert-expiration-467176       | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	| start   | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-802149            | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	| delete  | -p                                                     | disable-driver-mounts-737017 | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	|         | disable-driver-mounts-737017                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:46 UTC |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-101261        | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:45 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-339929                  | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-339929 --memory=2200                     | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC | 22 Jul 24 11:57 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-605740  | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC | 22 Jul 24 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC |                     |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-802149                 | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-101261                              | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:47 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-101261             | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:47 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-101261                              | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-605740       | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:49 UTC | 22 Jul 24 11:57 UTC |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 11:49:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 11:49:15.771364   60225 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:49:15.771757   60225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:49:15.771777   60225 out.go:304] Setting ErrFile to fd 2...
	I0722 11:49:15.771784   60225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:49:15.772270   60225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:49:15.773178   60225 out.go:298] Setting JSON to false
	I0722 11:49:15.774093   60225 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5508,"bootTime":1721643448,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 11:49:15.774158   60225 start.go:139] virtualization: kvm guest
	I0722 11:49:15.776078   60225 out.go:177] * [default-k8s-diff-port-605740] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 11:49:15.777632   60225 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 11:49:15.777656   60225 notify.go:220] Checking for updates...
	I0722 11:49:15.780016   60225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 11:49:15.781179   60225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:49:15.782401   60225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:49:15.783538   60225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 11:49:15.784660   60225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 11:49:15.786153   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:49:15.786546   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:49:15.786580   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:49:15.801130   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40897
	I0722 11:49:15.801454   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:49:15.802000   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:49:15.802022   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:49:15.802343   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:49:15.802519   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:49:15.802785   60225 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 11:49:15.803097   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:49:15.803130   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:49:15.817222   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I0722 11:49:15.817616   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:49:15.818025   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:49:15.818050   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:49:15.818316   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:49:15.818457   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:49:15.851885   60225 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 11:49:15.853142   60225 start.go:297] selected driver: kvm2
	I0722 11:49:15.853162   60225 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:49:15.853293   60225 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 11:49:15.854178   60225 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:49:15.854267   60225 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 11:49:15.869086   60225 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 11:49:15.869437   60225 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:49:15.869496   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:49:15.869510   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:49:15.869553   60225 start.go:340] cluster config:
	{Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:49:15.869650   60225 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:49:15.871443   60225 out.go:177] * Starting "default-k8s-diff-port-605740" primary control-plane node in "default-k8s-diff-port-605740" cluster
	I0722 11:49:18.708660   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:15.872666   60225 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:49:15.872712   60225 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 11:49:15.872722   60225 cache.go:56] Caching tarball of preloaded images
	I0722 11:49:15.872822   60225 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 11:49:15.872836   60225 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 11:49:15.872964   60225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/config.json ...
	I0722 11:49:15.873188   60225 start.go:360] acquireMachinesLock for default-k8s-diff-port-605740: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:49:21.780635   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:27.860643   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:30.932670   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:37.012663   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:40.084620   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:46.164558   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:49.236597   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:55.316683   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:58.388708   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:04.468652   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:07.540692   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:13.620745   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:16.692661   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:22.772655   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:25.844570   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:31.924648   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:34.996632   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:38.000554   59477 start.go:364] duration metric: took 3m13.232713685s to acquireMachinesLock for "embed-certs-802149"
	I0722 11:50:38.000603   59477 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:50:38.000609   59477 fix.go:54] fixHost starting: 
	I0722 11:50:38.000916   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:50:38.000945   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:50:38.015673   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0722 11:50:38.016063   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:50:38.016570   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:50:38.016599   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:50:38.016926   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:50:38.017123   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:38.017256   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:50:38.018766   59477 fix.go:112] recreateIfNeeded on embed-certs-802149: state=Stopped err=<nil>
	I0722 11:50:38.018787   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	W0722 11:50:38.018925   59477 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:50:38.020306   59477 out.go:177] * Restarting existing kvm2 VM for "embed-certs-802149" ...
	I0722 11:50:38.021405   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Start
	I0722 11:50:38.021569   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring networks are active...
	I0722 11:50:38.022209   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring network default is active
	I0722 11:50:38.022492   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring network mk-embed-certs-802149 is active
	I0722 11:50:38.022753   59477 main.go:141] libmachine: (embed-certs-802149) Getting domain xml...
	I0722 11:50:38.023364   59477 main.go:141] libmachine: (embed-certs-802149) Creating domain...
	I0722 11:50:39.205696   59477 main.go:141] libmachine: (embed-certs-802149) Waiting to get IP...
	I0722 11:50:39.206555   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.206928   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.207002   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.206893   60553 retry.go:31] will retry after 250.927989ms: waiting for machine to come up
	I0722 11:50:39.459432   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.459909   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.459938   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.459862   60553 retry.go:31] will retry after 277.950273ms: waiting for machine to come up
	I0722 11:50:37.998282   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:50:37.998320   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:50:37.998616   58921 buildroot.go:166] provisioning hostname "no-preload-339929"
	I0722 11:50:37.998638   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:50:37.998852   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:50:38.000410   58921 machine.go:97] duration metric: took 4m37.434000152s to provisionDockerMachine
	I0722 11:50:38.000456   58921 fix.go:56] duration metric: took 4m37.453731858s for fixHost
	I0722 11:50:38.000466   58921 start.go:83] releasing machines lock for "no-preload-339929", held for 4m37.453770575s
	W0722 11:50:38.000487   58921 start.go:714] error starting host: provision: host is not running
	W0722 11:50:38.000589   58921 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0722 11:50:38.000597   58921 start.go:729] Will try again in 5 seconds ...
	I0722 11:50:39.739339   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.739770   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.739799   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.739724   60553 retry.go:31] will retry after 367.4788ms: waiting for machine to come up
	I0722 11:50:40.109153   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:40.109568   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:40.109598   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:40.109518   60553 retry.go:31] will retry after 599.052603ms: waiting for machine to come up
	I0722 11:50:40.709866   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:40.710342   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:40.710375   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:40.710299   60553 retry.go:31] will retry after 469.478286ms: waiting for machine to come up
	I0722 11:50:41.180930   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:41.181348   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:41.181370   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:41.181302   60553 retry.go:31] will retry after 690.713081ms: waiting for machine to come up
	I0722 11:50:41.873801   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:41.874158   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:41.874182   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:41.874106   60553 retry.go:31] will retry after 828.336067ms: waiting for machine to come up
	I0722 11:50:42.703984   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:42.704401   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:42.704422   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:42.704340   60553 retry.go:31] will retry after 1.22368693s: waiting for machine to come up
	I0722 11:50:43.929406   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:43.929866   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:43.929896   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:43.929838   60553 retry.go:31] will retry after 1.809806439s: waiting for machine to come up
	I0722 11:50:43.002990   58921 start.go:360] acquireMachinesLock for no-preload-339929: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:50:45.741657   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:45.742012   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:45.742034   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:45.741979   60553 retry.go:31] will retry after 2.216041266s: waiting for machine to come up
	I0722 11:50:47.959511   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:47.959979   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:47.960003   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:47.959919   60553 retry.go:31] will retry after 2.278973432s: waiting for machine to come up
	I0722 11:50:50.241992   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:50.242399   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:50.242413   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:50.242377   60553 retry.go:31] will retry after 2.533863574s: waiting for machine to come up
	I0722 11:50:52.779222   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:52.779627   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:52.779661   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:52.779579   60553 retry.go:31] will retry after 3.004874532s: waiting for machine to come up
	I0722 11:50:57.057071   59674 start.go:364] duration metric: took 3m21.54200658s to acquireMachinesLock for "old-k8s-version-101261"
	I0722 11:50:57.057128   59674 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:50:57.057138   59674 fix.go:54] fixHost starting: 
	I0722 11:50:57.057543   59674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:50:57.057575   59674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:50:57.073788   59674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36245
	I0722 11:50:57.074103   59674 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:50:57.074561   59674 main.go:141] libmachine: Using API Version  1
	I0722 11:50:57.074582   59674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:50:57.074903   59674 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:50:57.075091   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:50:57.075225   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetState
	I0722 11:50:57.076587   59674 fix.go:112] recreateIfNeeded on old-k8s-version-101261: state=Stopped err=<nil>
	I0722 11:50:57.076607   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	W0722 11:50:57.076745   59674 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:50:57.079659   59674 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-101261" ...
	I0722 11:50:55.787998   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.788533   59477 main.go:141] libmachine: (embed-certs-802149) Found IP for machine: 192.168.72.113
	I0722 11:50:55.788556   59477 main.go:141] libmachine: (embed-certs-802149) Reserving static IP address...
	I0722 11:50:55.788567   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has current primary IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.788933   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "embed-certs-802149", mac: "52:54:00:ce:af:8a", ip: "192.168.72.113"} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.788954   59477 main.go:141] libmachine: (embed-certs-802149) DBG | skip adding static IP to network mk-embed-certs-802149 - found existing host DHCP lease matching {name: "embed-certs-802149", mac: "52:54:00:ce:af:8a", ip: "192.168.72.113"}
	I0722 11:50:55.788965   59477 main.go:141] libmachine: (embed-certs-802149) Reserved static IP address: 192.168.72.113
	I0722 11:50:55.788974   59477 main.go:141] libmachine: (embed-certs-802149) Waiting for SSH to be available...
	I0722 11:50:55.788984   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Getting to WaitForSSH function...
	I0722 11:50:55.791252   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.791573   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.791597   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.791699   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Using SSH client type: external
	I0722 11:50:55.791735   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa (-rw-------)
	I0722 11:50:55.791758   59477 main.go:141] libmachine: (embed-certs-802149) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:50:55.791768   59477 main.go:141] libmachine: (embed-certs-802149) DBG | About to run SSH command:
	I0722 11:50:55.791776   59477 main.go:141] libmachine: (embed-certs-802149) DBG | exit 0
	I0722 11:50:55.916215   59477 main.go:141] libmachine: (embed-certs-802149) DBG | SSH cmd err, output: <nil>: 
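The WaitForSSH exchange above is just a no-op command run over SSH until the guest answers. A minimal bash sketch of that probe, reusing the key path and options printed in the log (the retry count and sleep interval here are illustrative, not minikube's actual schedule):

	KEY=/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa
	for i in $(seq 1 60); do
	  # "exit 0" is the same no-op the provisioner issues; success means sshd is reachable
	  if ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	         -o IdentitiesOnly=yes -i "$KEY" docker@192.168.72.113 'exit 0'; then
	    echo "SSH is available"
	    break
	  fi
	  sleep 2
	done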
	I0722 11:50:55.916575   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetConfigRaw
	I0722 11:50:55.917177   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:55.919429   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.919723   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.919755   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.920020   59477 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/config.json ...
	I0722 11:50:55.920227   59477 machine.go:94] provisionDockerMachine start ...
	I0722 11:50:55.920249   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:55.920461   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:55.922469   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.922731   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.922756   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.922887   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:55.923063   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:55.923205   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:55.923340   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:55.923492   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:55.923698   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:55.923712   59477 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:50:56.032434   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:50:56.032465   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.032684   59477 buildroot.go:166] provisioning hostname "embed-certs-802149"
	I0722 11:50:56.032712   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.032892   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.035477   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.035797   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.035826   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.035969   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.036126   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.036288   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.036426   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.036649   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.036806   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.036818   59477 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-802149 && echo "embed-certs-802149" | sudo tee /etc/hostname
	I0722 11:50:56.158574   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-802149
	
	I0722 11:50:56.158609   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.161390   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.161780   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.161812   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.161978   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.162246   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.162444   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.162593   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.162793   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.162965   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.162983   59477 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-802149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-802149/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-802149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:50:56.281386   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:50:56.281421   59477 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:50:56.281454   59477 buildroot.go:174] setting up certificates
	I0722 11:50:56.281470   59477 provision.go:84] configureAuth start
	I0722 11:50:56.281487   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.281781   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:56.284122   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.284438   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.284468   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.284549   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.286400   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.286806   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.286835   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.286962   59477 provision.go:143] copyHostCerts
	I0722 11:50:56.287027   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:50:56.287038   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:50:56.287102   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:50:56.287205   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:50:56.287214   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:50:56.287241   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:50:56.287297   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:50:56.287304   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:50:56.287326   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:50:56.287372   59477 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.embed-certs-802149 san=[127.0.0.1 192.168.72.113 embed-certs-802149 localhost minikube]
	I0722 11:50:56.388618   59477 provision.go:177] copyRemoteCerts
	I0722 11:50:56.388666   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:50:56.388689   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.391149   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.391436   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.391460   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.391656   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.391810   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.391928   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.392068   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:56.474640   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:50:56.497641   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 11:50:56.519444   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:50:56.541351   59477 provision.go:87] duration metric: took 259.857731ms to configureAuth
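configureAuth finishes with ca.pem, server.pem and server-key.pem copied to /etc/docker on the guest. A quick hand check of that material over the same SSH session (generic openssl calls, not something the test itself runs):

	# the server cert should chain to the minikube CA and carry the SANs generated above
	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'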
	I0722 11:50:56.541381   59477 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:50:56.541543   59477 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:50:56.541625   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.544154   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.544682   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.544718   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.544922   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.545125   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.545301   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.545427   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.545653   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.545828   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.545844   59477 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:50:56.811690   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:50:56.811726   59477 machine.go:97] duration metric: took 891.484788ms to provisionDockerMachine
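The container-runtime step just before this writes /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' and restarts CRI-O. Assuming the buildroot image wires that env file into crio.service (as minikube's guest OS does), the effect can be confirmed on the guest with standard systemd commands:

	cat /etc/sysconfig/crio.minikube              # the env file written above
	systemctl is-active crio                      # should be active again after the restart
	systemctl cat crio | grep -i environmentfile  # shows whether the drop-in is actually consumed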
	I0722 11:50:56.811740   59477 start.go:293] postStartSetup for "embed-certs-802149" (driver="kvm2")
	I0722 11:50:56.811772   59477 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:50:56.811791   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:56.812107   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:50:56.812137   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.814602   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.815007   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.815032   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.815143   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.815380   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.815566   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.815746   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:56.904332   59477 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:50:56.908423   59477 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:50:56.908451   59477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:50:56.908508   59477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:50:56.908587   59477 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:50:56.908680   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:50:56.919264   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:50:56.943783   59477 start.go:296] duration metric: took 132.033326ms for postStartSetup
	I0722 11:50:56.943814   59477 fix.go:56] duration metric: took 18.943205526s for fixHost
	I0722 11:50:56.943833   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.946256   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.946547   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.946575   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.946732   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.946929   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.947082   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.947188   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.947356   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.947518   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.947528   59477 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 11:50:57.056893   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649057.031410961
	
	I0722 11:50:57.056927   59477 fix.go:216] guest clock: 1721649057.031410961
	I0722 11:50:57.056936   59477 fix.go:229] Guest: 2024-07-22 11:50:57.031410961 +0000 UTC Remote: 2024-07-22 11:50:56.943818166 +0000 UTC m=+212.308172183 (delta=87.592795ms)
	I0722 11:50:57.056961   59477 fix.go:200] guest clock delta is within tolerance: 87.592795ms
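The guest-clock check compares the VM's date +%s.%N against the host's wall clock and accepts the machine when the skew is within tolerance (87.6ms here). A rough bash equivalent (the key path is the one for this profile; the comparison itself is generic):

	KEY=/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa
	guest=$(ssh -i "$KEY" docker@192.168.72.113 'date +%s.%N')
	host=$(date +%s.%N)
	# absolute skew in seconds
	awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "delta=%.6fs\n", d }'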
	I0722 11:50:57.056970   59477 start.go:83] releasing machines lock for "embed-certs-802149", held for 19.056384178s
	I0722 11:50:57.057002   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.057268   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:57.059965   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.060412   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.060443   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.060671   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061167   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061345   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061428   59477 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:50:57.061479   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:57.061561   59477 ssh_runner.go:195] Run: cat /version.json
	I0722 11:50:57.061586   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:57.064433   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.064735   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.064856   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.064879   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.065018   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:57.065118   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.065143   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.065201   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:57.065298   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:57.065408   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:57.065481   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:57.065556   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:57.065624   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:57.065770   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:57.167044   59477 ssh_runner.go:195] Run: systemctl --version
	I0722 11:50:57.172714   59477 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:50:57.313674   59477 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:50:57.319474   59477 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:50:57.319535   59477 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:50:57.335011   59477 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:50:57.335031   59477 start.go:495] detecting cgroup driver to use...
	I0722 11:50:57.335093   59477 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:50:57.351191   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:50:57.365322   59477 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:50:57.365376   59477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:50:57.379264   59477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:50:57.393946   59477 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:50:57.510830   59477 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:50:57.687208   59477 docker.go:233] disabling docker service ...
	I0722 11:50:57.687269   59477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:50:57.703909   59477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:50:57.717812   59477 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:50:57.855988   59477 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:50:57.973911   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:50:57.988891   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:50:58.007784   59477 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 11:50:58.007841   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.019588   59477 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:50:58.019649   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.030056   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.042635   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.053368   59477 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:50:58.064180   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.074677   59477 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.092573   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.103630   59477 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:50:58.114065   59477 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:50:58.114131   59477 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:50:58.128769   59477 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:50:58.139226   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:50:58.301342   59477 ssh_runner.go:195] Run: sudo systemctl restart crio
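The run of sed commands from 11:50:58.007 onward is the whole CRI-O preparation: pin the pause image, switch to the cgroupfs driver, allow unprivileged low ports, and make sure bridge traffic and forwarding work before the restart. Collapsed into one commented script (same expressions as in the log):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# pause image must match what kubeadm/kubelet expect for v1.30.x
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	# the kubelet config below uses cgroupfs, so CRI-O has to agree; conmon lands in the pod cgroup
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# let pods bind ports below 1024 without extra capabilities
	sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
	sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	# pod networking needs bridged traffic visible to iptables and IPv4 forwarding enabled
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio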
	I0722 11:50:58.455996   59477 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:50:58.456085   59477 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:50:58.460904   59477 start.go:563] Will wait 60s for crictl version
	I0722 11:50:58.460969   59477 ssh_runner.go:195] Run: which crictl
	I0722 11:50:58.464918   59477 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:50:58.501783   59477 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:50:58.501867   59477 ssh_runner.go:195] Run: crio --version
	I0722 11:50:58.529010   59477 ssh_runner.go:195] Run: crio --version
	I0722 11:50:58.566811   59477 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 11:50:58.568309   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:58.571088   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:58.571594   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:58.571620   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:58.571813   59477 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 11:50:58.575927   59477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
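That one-liner is the idempotent /etc/hosts update minikube uses: drop any stale line for the name, append the fresh mapping, and copy the rewritten file back into place. The same pattern as a small reusable function (the pin_host name is made up for illustration):

	# pin_host NAME IP -- make NAME resolve to IP exactly once in /etc/hosts
	pin_host() {
	  local name="$1" ip="$2" tmp
	  tmp=$(mktemp)
	  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
	  sudo cp "$tmp" /etc/hosts && rm -f "$tmp"
	}
	pin_host host.minikube.internal 192.168.72.1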
	I0722 11:50:58.589002   59477 kubeadm.go:883] updating cluster {Name:embed-certs-802149 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:50:58.589126   59477 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:50:58.589187   59477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:50:58.625716   59477 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 11:50:58.625836   59477 ssh_runner.go:195] Run: which lz4
	I0722 11:50:58.629760   59477 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 11:50:58.634037   59477 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:50:58.634070   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 11:50:57.080830   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .Start
	I0722 11:50:57.080987   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring networks are active...
	I0722 11:50:57.081647   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring network default is active
	I0722 11:50:57.081955   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring network mk-old-k8s-version-101261 is active
	I0722 11:50:57.082277   59674 main.go:141] libmachine: (old-k8s-version-101261) Getting domain xml...
	I0722 11:50:57.083008   59674 main.go:141] libmachine: (old-k8s-version-101261) Creating domain...
	I0722 11:50:58.331212   59674 main.go:141] libmachine: (old-k8s-version-101261) Waiting to get IP...
	I0722 11:50:58.332090   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:58.332510   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:58.332594   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:58.332505   60690 retry.go:31] will retry after 310.971479ms: waiting for machine to come up
	I0722 11:50:58.645391   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:58.645871   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:58.645898   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:58.645841   60690 retry.go:31] will retry after 371.739884ms: waiting for machine to come up
	I0722 11:50:59.019622   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.020229   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.020258   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.020202   60690 retry.go:31] will retry after 459.770177ms: waiting for machine to come up
	I0722 11:50:59.482207   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.482871   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.482901   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.482830   60690 retry.go:31] will retry after 459.633846ms: waiting for machine to come up
	I0722 11:50:59.944748   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.945204   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.945234   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.945166   60690 retry.go:31] will retry after 661.206679ms: waiting for machine to come up
	I0722 11:51:00.149442   59477 crio.go:462] duration metric: took 1.519707341s to copy over tarball
	I0722 11:51:00.149516   59477 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:02.402666   59477 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.253119001s)
	I0722 11:51:02.402691   59477 crio.go:469] duration metric: took 2.253218813s to extract the tarball
	I0722 11:51:02.402699   59477 ssh_runner.go:146] rm: /preloaded.tar.lz4
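Everything between 11:50:58.589 and here is the preload path: the crictl image check comes back empty, so the ~406 MB lz4 tarball is copied to /preloaded.tar.lz4 and unpacked straight into /var, after which the same query reports a full image set. Condensed, as run on the guest:

	# rebuild the container storage under /var from the preloaded tarball
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	# the runtime should now list the v1.30.3 control-plane images
	sudo crictl images --output json | head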
	I0722 11:51:02.441191   59477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:02.487854   59477 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:51:02.487881   59477 cache_images.go:84] Images are preloaded, skipping loading
	I0722 11:51:02.487890   59477 kubeadm.go:934] updating node { 192.168.72.113 8443 v1.30.3 crio true true} ...
	I0722 11:51:02.488035   59477 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-802149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:02.488123   59477 ssh_runner.go:195] Run: crio config
	I0722 11:51:02.532769   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:51:02.532790   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:02.532801   59477 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:02.532833   59477 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.113 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-802149 NodeName:embed-certs-802149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:51:02.533018   59477 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.113
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-802149"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.113
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.113"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
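This rendered config is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below (the 2162-byte scp). A quick offline sanity check of that file is possible with the kubeadm binary already on the guest; the config validate subcommand exists in v1.30 but is not something the test itself invokes:

	KUBEADM=/var/lib/minikube/binaries/v1.30.3/kubeadm
	# parses the InitConfiguration/ClusterConfiguration plus the kubelet and kube-proxy component configs
	sudo "$KUBEADM" config validate --config /var/tmp/minikube/kubeadm.yaml.new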
	
	I0722 11:51:02.533107   59477 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 11:51:02.543311   59477 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:02.543385   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:02.552865   59477 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0722 11:51:02.569231   59477 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:02.584952   59477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0722 11:51:02.601722   59477 ssh_runner.go:195] Run: grep 192.168.72.113	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:02.605830   59477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.113	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:02.617991   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:02.739082   59477 ssh_runner.go:195] Run: sudo systemctl start kubelet
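With the 10-kubeadm.conf drop-in and kubelet.service in place and the daemon reloaded, the kubelet just started should be running with the ExecStart rendered at 11:51:02.488 (hostname-override and node-ip for this profile). Standard systemd checks on the guest, outside the test flow:

	systemctl cat kubelet                  # unit plus the 10-kubeadm.conf drop-in
	systemctl show -p ExecStart kubelet    # should include --hostname-override=embed-certs-802149 --node-ip=192.168.72.113
	systemctl is-active kubelet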
	I0722 11:51:02.756204   59477 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149 for IP: 192.168.72.113
	I0722 11:51:02.756226   59477 certs.go:194] generating shared ca certs ...
	I0722 11:51:02.756254   59477 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:02.756452   59477 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:02.756509   59477 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:02.756521   59477 certs.go:256] generating profile certs ...
	I0722 11:51:02.756641   59477 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/client.key
	I0722 11:51:02.756720   59477 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key.447fbea1
	I0722 11:51:02.756767   59477 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.key
	I0722 11:51:02.756907   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:02.756955   59477 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:02.756968   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:02.757004   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:02.757037   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:02.757073   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:02.757130   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:02.758009   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:02.791767   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:02.833143   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:02.859372   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:02.888441   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0722 11:51:02.926712   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 11:51:02.963931   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:02.986981   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 11:51:03.010885   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:03.033851   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:03.057467   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:03.080230   59477 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:03.096981   59477 ssh_runner.go:195] Run: openssl version
	I0722 11:51:03.103002   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:03.114012   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.118692   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.118743   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.124703   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:03.134986   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:03.145119   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.149396   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.149442   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.154767   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:03.165063   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:03.175292   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.179650   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.179691   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.184991   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
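The openssl/ln pairs in the lines above follow the standard OpenSSL hashed-directory convention: each CA under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject hash so TLS clients can resolve it. A minimal sketch of that same pattern, using the minikubeCA path and hash that appear in this log (illustrative only, not part of the test output):

	# Sketch of the hashed-symlink convention applied above.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # OpenSSL looks up CAs as <hash>.0
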
	I0722 11:51:03.195065   59477 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:03.199423   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:03.205027   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:03.210699   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:03.216411   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:03.221888   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:03.227658   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 11:51:03.233098   59477 kubeadm.go:392] StartCluster: {Name:embed-certs-802149 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:03.233171   59477 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:03.233221   59477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:03.269240   59477 cri.go:89] found id: ""
	I0722 11:51:03.269311   59477 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:03.279739   59477 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:03.279758   59477 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:03.279809   59477 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:03.289523   59477 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:03.290456   59477 kubeconfig.go:125] found "embed-certs-802149" server: "https://192.168.72.113:8443"
	I0722 11:51:03.292369   59477 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:03.301716   59477 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.113
	I0722 11:51:03.301749   59477 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:03.301758   59477 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:03.301794   59477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:03.337520   59477 cri.go:89] found id: ""
	I0722 11:51:03.337587   59477 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:03.352758   59477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:03.362272   59477 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:03.362305   59477 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:03.362350   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:51:03.370574   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:03.370621   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:03.379339   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:51:03.387427   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:03.387470   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:03.395970   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:51:03.404226   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:03.404280   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:03.412683   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:51:03.420838   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:03.420877   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:03.429146   59477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:03.440442   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:03.565768   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:04.457748   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
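Taken together, the restartPrimaryControlPlane path logged above removes the stale kubeconfig files, stages the new kubeadm config, and re-runs the individual kubeadm init phases. A condensed sketch of that sequence, assembled from the commands shown above rather than copied verbatim:

	# Condensed restart sequence (sketch; the real run interleaves checks between steps).
	sudo rm -f /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
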
	I0722 11:51:00.608285   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:00.608737   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:00.608759   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:00.608685   60690 retry.go:31] will retry after 728.049334ms: waiting for machine to come up
	I0722 11:51:01.337864   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:01.338406   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:01.338437   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:01.338329   60690 retry.go:31] will retry after 1.060339766s: waiting for machine to come up
	I0722 11:51:02.400096   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:02.400633   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:02.400664   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:02.400580   60690 retry.go:31] will retry after 957.922107ms: waiting for machine to come up
	I0722 11:51:03.360231   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:03.360663   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:03.360692   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:03.360612   60690 retry.go:31] will retry after 1.717107267s: waiting for machine to come up
	I0722 11:51:05.080655   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:05.081172   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:05.081196   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:05.081111   60690 retry.go:31] will retry after 1.708281457s: waiting for machine to come up
	I0722 11:51:04.673803   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:04.746647   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:04.870194   59477 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:04.870304   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.370787   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.870977   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.971259   59477 api_server.go:72] duration metric: took 1.101066217s to wait for apiserver process to appear ...
	I0722 11:51:05.971291   59477 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:51:05.971313   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:05.971841   59477 api_server.go:269] stopped: https://192.168.72.113:8443/healthz: Get "https://192.168.72.113:8443/healthz": dial tcp 192.168.72.113:8443: connect: connection refused
	I0722 11:51:06.471490   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.174013   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:09.174041   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:09.174055   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.201462   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.201513   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:09.471884   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.477573   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.477592   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:06.790946   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:06.791370   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:06.791398   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:06.791331   60690 retry.go:31] will retry after 2.398904394s: waiting for machine to come up
	I0722 11:51:09.193385   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:09.193778   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:09.193806   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:09.193704   60690 retry.go:31] will retry after 2.18416034s: waiting for machine to come up
	I0722 11:51:09.972279   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.982112   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.982144   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:10.471495   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:10.478784   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I0722 11:51:10.487326   59477 api_server.go:141] control plane version: v1.30.3
	I0722 11:51:10.487355   59477 api_server.go:131] duration metric: took 4.516056164s to wait for apiserver health ...
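The healthz probes above progress from connection refused, to 403 (anonymous user), to 500 (post-start hooks still initialising), to 200 once bootstrap completes; anything short of 200 is treated as "not ready" and retried on a roughly half-second cadence. A rough shell equivalent of that wait (sketch only; the actual check is the Go client shown in api_server.go):

	# Poll until /healthz returns 200; -f makes curl fail on 403/500 so the loop retries.
	until curl -ksf https://192.168.72.113:8443/healthz >/dev/null; do sleep 0.5; done
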
	I0722 11:51:10.487365   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:51:10.487374   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:10.488949   59477 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:51:10.490288   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:51:10.507047   59477 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:51:10.526828   59477 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:51:10.541695   59477 system_pods.go:59] 8 kube-system pods found
	I0722 11:51:10.541731   59477 system_pods.go:61] "coredns-7db6d8ff4d-s2zgw" [13ffaca7-beca-4c43-b7a7-2167fe71295c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:51:10.541741   59477 system_pods.go:61] "etcd-embed-certs-802149" [f81bfdc3-cc8f-40d3-9f6c-6b84b6490c07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:51:10.541752   59477 system_pods.go:61] "kube-apiserver-embed-certs-802149" [325b1597-385e-44df-b65c-2de853d792eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:51:10.541760   59477 system_pods.go:61] "kube-controller-manager-embed-certs-802149" [25d3ae23-fe5d-46b7-8d93-917d7c83912b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:51:10.541772   59477 system_pods.go:61] "kube-proxy-t9lkm" [0712acb3-3926-4b78-9c64-a7e46b1a4b18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 11:51:10.541780   59477 system_pods.go:61] "kube-scheduler-embed-certs-802149" [b521ffd3-9422-4df4-9f25-5e81a2d0fa9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:51:10.541788   59477 system_pods.go:61] "metrics-server-569cc877fc-wm2w8" [db886758-d7bb-41b3-b127-6f9fef839af0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:51:10.541799   59477 system_pods.go:61] "storage-provisioner" [291229fb-8a57-4976-911c-070ccc93adcd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 11:51:10.541810   59477 system_pods.go:74] duration metric: took 14.964696ms to wait for pod list to return data ...
	I0722 11:51:10.541822   59477 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:51:10.545280   59477 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:51:10.545307   59477 node_conditions.go:123] node cpu capacity is 2
	I0722 11:51:10.545327   59477 node_conditions.go:105] duration metric: took 3.49089ms to run NodePressure ...
	I0722 11:51:10.545349   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:10.812864   59477 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:51:10.817360   59477 kubeadm.go:739] kubelet initialised
	I0722 11:51:10.817379   59477 kubeadm.go:740] duration metric: took 4.491449ms waiting for restarted kubelet to initialise ...
	I0722 11:51:10.817387   59477 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:51:10.823766   59477 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.829370   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.829399   59477 pod_ready.go:81] duration metric: took 5.605447ms for pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.829411   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.829420   59477 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.835224   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "etcd-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.835250   59477 pod_ready.go:81] duration metric: took 5.819727ms for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.835261   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "etcd-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.835270   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.840324   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.840355   59477 pod_ready.go:81] duration metric: took 5.074415ms for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.840369   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.840378   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.939805   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.939828   59477 pod_ready.go:81] duration metric: took 99.423274ms for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.939837   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.939843   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t9lkm" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:11.329932   59477 pod_ready.go:92] pod "kube-proxy-t9lkm" in "kube-system" namespace has status "Ready":"True"
	I0722 11:51:11.329954   59477 pod_ready.go:81] duration metric: took 390.103451ms for pod "kube-proxy-t9lkm" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:11.329964   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:13.336193   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:11.378924   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:11.379301   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:11.379324   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:11.379257   60690 retry.go:31] will retry after 3.119433482s: waiting for machine to come up
	I0722 11:51:14.501549   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.502004   59674 main.go:141] libmachine: (old-k8s-version-101261) Found IP for machine: 192.168.50.51
	I0722 11:51:14.502029   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has current primary IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.502040   59674 main.go:141] libmachine: (old-k8s-version-101261) Reserving static IP address...
	I0722 11:51:14.502410   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "old-k8s-version-101261", mac: "52:54:00:e5:34:9a", ip: "192.168.50.51"} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.502429   59674 main.go:141] libmachine: (old-k8s-version-101261) Reserved static IP address: 192.168.50.51
	I0722 11:51:14.502448   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | skip adding static IP to network mk-old-k8s-version-101261 - found existing host DHCP lease matching {name: "old-k8s-version-101261", mac: "52:54:00:e5:34:9a", ip: "192.168.50.51"}
	I0722 11:51:14.502464   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Getting to WaitForSSH function...
	I0722 11:51:14.502481   59674 main.go:141] libmachine: (old-k8s-version-101261) Waiting for SSH to be available...
	I0722 11:51:14.504709   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.504989   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.505018   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.505192   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using SSH client type: external
	I0722 11:51:14.505229   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa (-rw-------)
	I0722 11:51:14.505273   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:14.505287   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | About to run SSH command:
	I0722 11:51:14.505300   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | exit 0
	I0722 11:51:14.628343   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | SSH cmd err, output: <nil>: 
	I0722 11:51:14.628747   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetConfigRaw
	I0722 11:51:14.629343   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:14.631934   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.632294   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.632323   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.632541   59674 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/config.json ...
	I0722 11:51:14.632730   59674 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:14.632747   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:14.632934   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.635214   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.635567   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.635594   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.635663   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.635887   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.636070   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.636212   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.636492   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.636656   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.636665   59674 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:14.745179   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:14.745210   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:14.745456   59674 buildroot.go:166] provisioning hostname "old-k8s-version-101261"
	I0722 11:51:14.745482   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:14.745664   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.748709   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.749155   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.749187   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.749356   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.749528   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.749708   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.749851   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.750115   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.750325   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.750339   59674 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-101261 && echo "old-k8s-version-101261" | sudo tee /etc/hostname
	I0722 11:51:14.878323   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-101261
	
	I0722 11:51:14.878374   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.881403   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.881776   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.881799   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.882004   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.882191   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.882368   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.882523   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.882714   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.882886   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.882914   59674 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-101261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-101261/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-101261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:15.005182   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:15.005211   59674 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:15.005232   59674 buildroot.go:174] setting up certificates
	I0722 11:51:15.005244   59674 provision.go:84] configureAuth start
	I0722 11:51:15.005257   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:15.005510   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:15.008414   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.008818   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.008842   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.009021   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.011255   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.011549   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.011571   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.011712   59674 provision.go:143] copyHostCerts
	I0722 11:51:15.011784   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:15.011798   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:15.011862   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:15.011991   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:15.012003   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:15.012033   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:15.012117   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:15.012126   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:15.012156   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:15.012235   59674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-101261 san=[127.0.0.1 192.168.50.51 localhost minikube old-k8s-version-101261]
	I0722 11:51:16.173298   60225 start.go:364] duration metric: took 2m0.300081245s to acquireMachinesLock for "default-k8s-diff-port-605740"
	I0722 11:51:16.173351   60225 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:51:16.173359   60225 fix.go:54] fixHost starting: 
	I0722 11:51:16.173747   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:51:16.173788   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:51:16.189994   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0722 11:51:16.190364   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:51:16.190849   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:51:16.190880   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:51:16.191295   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:51:16.191520   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:16.191701   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:51:16.193226   60225 fix.go:112] recreateIfNeeded on default-k8s-diff-port-605740: state=Stopped err=<nil>
	I0722 11:51:16.193246   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	W0722 11:51:16.193413   60225 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:51:16.195294   60225 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-605740" ...
	I0722 11:51:15.514379   59674 provision.go:177] copyRemoteCerts
	I0722 11:51:15.514438   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:15.514471   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.517061   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.517350   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.517375   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.517523   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.517692   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.517856   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.517976   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:15.598446   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:15.622512   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 11:51:15.645865   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 11:51:15.669136   59674 provision.go:87] duration metric: took 663.880253ms to configureAuth
	I0722 11:51:15.669166   59674 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:15.669360   59674 config.go:182] Loaded profile config "old-k8s-version-101261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 11:51:15.669441   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.672245   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.672720   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.672769   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.672859   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.673066   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.673228   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.673348   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.673589   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:15.673764   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:15.673784   59674 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:15.935046   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:15.935071   59674 machine.go:97] duration metric: took 1.302328915s to provisionDockerMachine
	I0722 11:51:15.935082   59674 start.go:293] postStartSetup for "old-k8s-version-101261" (driver="kvm2")
	I0722 11:51:15.935094   59674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:15.935114   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:15.935445   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:15.935485   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.938454   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.938802   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.938828   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.939013   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.939212   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.939341   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.939477   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.023536   59674 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:16.028446   59674 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:16.028474   59674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:16.028542   59674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:16.028639   59674 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:16.028746   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:16.038705   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:16.065421   59674 start.go:296] duration metric: took 130.328201ms for postStartSetup
	I0722 11:51:16.065455   59674 fix.go:56] duration metric: took 19.008317885s for fixHost
	I0722 11:51:16.065480   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.068098   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.068330   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.068354   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.068486   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.068697   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.068883   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.069035   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.069215   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:16.069371   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:16.069380   59674 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:51:16.173115   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649076.142588532
	
	I0722 11:51:16.173135   59674 fix.go:216] guest clock: 1721649076.142588532
	I0722 11:51:16.173149   59674 fix.go:229] Guest: 2024-07-22 11:51:16.142588532 +0000 UTC Remote: 2024-07-22 11:51:16.065460257 +0000 UTC m=+220.687192060 (delta=77.128275ms)
	I0722 11:51:16.173189   59674 fix.go:200] guest clock delta is within tolerance: 77.128275ms
	I0722 11:51:16.173196   59674 start.go:83] releasing machines lock for "old-k8s-version-101261", held for 19.116093793s
	I0722 11:51:16.173224   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.173497   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:16.176102   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.176522   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.176564   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.176712   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177189   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177387   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177476   59674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:16.177519   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.177627   59674 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:16.177650   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.180365   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180402   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180751   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.180773   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180806   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.180819   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180908   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.181020   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.181091   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.181168   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.181254   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.181331   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.181346   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.181492   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.262013   59674 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:16.292921   59674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:16.437729   59674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:16.443840   59674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:16.443929   59674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:16.459686   59674 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:16.459703   59674 start.go:495] detecting cgroup driver to use...
	I0722 11:51:16.459761   59674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:16.474514   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:16.487808   59674 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:16.487862   59674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:16.500977   59674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:16.514210   59674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:16.629558   59674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:16.810274   59674 docker.go:233] disabling docker service ...
	I0722 11:51:16.810351   59674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:16.829708   59674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:16.848587   59674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:16.973745   59674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:17.114538   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:17.128727   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:17.147575   59674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 11:51:17.147628   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.157881   59674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:17.157939   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.168881   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.179407   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.189894   59674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:17.201433   59674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:17.210901   59674 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:17.210954   59674 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:17.224683   59674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:51:17.235711   59674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:17.366833   59674 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:17.508852   59674 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:17.508932   59674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:17.514001   59674 start.go:563] Will wait 60s for crictl version
	I0722 11:51:17.514051   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:17.517678   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:17.555193   59674 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:17.555272   59674 ssh_runner.go:195] Run: crio --version
	I0722 11:51:17.583250   59674 ssh_runner.go:195] Run: crio --version
	I0722 11:51:17.615045   59674 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 11:51:15.837077   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:17.838129   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:17.616423   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:17.619616   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:17.620012   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:17.620043   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:17.620213   59674 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:17.624632   59674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:17.639759   59674 kubeadm.go:883] updating cluster {Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:17.639882   59674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 11:51:17.639923   59674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:17.688299   59674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:51:17.688370   59674 ssh_runner.go:195] Run: which lz4
	I0722 11:51:17.692462   59674 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 11:51:17.696723   59674 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:51:17.696761   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 11:51:19.364933   59674 crio.go:462] duration metric: took 1.672511697s to copy over tarball
	I0722 11:51:19.365010   59674 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:16.196500   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Start
	I0722 11:51:16.196676   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring networks are active...
	I0722 11:51:16.197307   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring network default is active
	I0722 11:51:16.197719   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring network mk-default-k8s-diff-port-605740 is active
	I0722 11:51:16.198143   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Getting domain xml...
	I0722 11:51:16.198839   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Creating domain...
	I0722 11:51:17.463368   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting to get IP...
	I0722 11:51:17.464268   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.464666   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.464716   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:17.464632   60829 retry.go:31] will retry after 215.824583ms: waiting for machine to come up
	I0722 11:51:17.682231   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.682588   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.682616   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:17.682546   60829 retry.go:31] will retry after 345.816562ms: waiting for machine to come up
	I0722 11:51:18.030040   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.030591   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.030625   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.030526   60829 retry.go:31] will retry after 332.854172ms: waiting for machine to come up
	I0722 11:51:18.365009   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.365493   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.365522   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.365455   60829 retry.go:31] will retry after 478.33893ms: waiting for machine to come up
	I0722 11:51:18.846014   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.846447   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.846475   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.846386   60829 retry.go:31] will retry after 484.269461ms: waiting for machine to come up
	I0722 11:51:19.332181   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:19.332572   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:19.332607   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:19.332523   60829 retry.go:31] will retry after 856.318702ms: waiting for machine to come up
	I0722 11:51:20.190301   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.190751   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.190775   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:20.190702   60829 retry.go:31] will retry after 747.6345ms: waiting for machine to come up
	I0722 11:51:19.838679   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:21.850685   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:24.338532   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:22.347245   59674 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.982204367s)
	I0722 11:51:22.347275   59674 crio.go:469] duration metric: took 2.982313685s to extract the tarball
	I0722 11:51:22.347283   59674 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:22.390059   59674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:22.429356   59674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:51:22.429383   59674 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 11:51:22.429499   59674 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.429520   59674 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.429524   59674 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.429545   59674 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 11:51:22.429498   59674 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.429497   59674 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.429529   59674 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.429498   59674 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.431549   59674 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.431556   59674 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 11:51:22.431570   59674 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.431588   59674 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.431611   59674 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.431555   59674 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.431666   59674 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.431675   59674 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.603462   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.604733   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.608788   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.611177   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.616981   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.634838   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.674004   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 11:51:22.706162   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.730052   59674 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 11:51:22.730112   59674 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 11:51:22.730129   59674 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.730142   59674 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.730183   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.730196   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.760229   59674 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 11:51:22.760271   59674 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.760322   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.787207   59674 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 11:51:22.787244   59674 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 11:51:22.787254   59674 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.787273   59674 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.787303   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.787311   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.828611   59674 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 11:51:22.828656   59674 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.828703   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.841609   59674 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 11:51:22.841648   59674 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 11:51:22.841692   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.913517   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.913549   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.913557   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.913605   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.913519   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.913605   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.913625   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 11:51:23.063640   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 11:51:23.063652   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 11:51:23.063742   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 11:51:23.063766   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 11:51:23.070202   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0722 11:51:23.073265   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 11:51:23.073310   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 11:51:23.073358   59674 cache_images.go:92] duration metric: took 643.962788ms to LoadCachedImages
	W0722 11:51:23.073425   59674 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0722 11:51:23.073438   59674 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.20.0 crio true true} ...
	I0722 11:51:23.073584   59674 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-101261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:23.073666   59674 ssh_runner.go:195] Run: crio config
	I0722 11:51:23.125532   59674 cni.go:84] Creating CNI manager for ""
	I0722 11:51:23.125554   59674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:23.125566   59674 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:23.125590   59674 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-101261 NodeName:old-k8s-version-101261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 11:51:23.125753   59674 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-101261"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:23.125818   59674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 11:51:23.136207   59674 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:23.136277   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:23.146103   59674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0722 11:51:23.163756   59674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:23.183108   59674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0722 11:51:23.201223   59674 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:23.205369   59674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:23.218711   59674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:23.339415   59674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:23.358601   59674 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261 for IP: 192.168.50.51
	I0722 11:51:23.358622   59674 certs.go:194] generating shared ca certs ...
	I0722 11:51:23.358654   59674 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:23.358813   59674 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:23.358865   59674 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:23.358877   59674 certs.go:256] generating profile certs ...
	I0722 11:51:23.358990   59674 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/client.key
	I0722 11:51:23.359058   59674 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key.455618c3
	I0722 11:51:23.359110   59674 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key
	I0722 11:51:23.359248   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:23.359286   59674 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:23.359300   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:23.359332   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:23.359363   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:23.359393   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:23.359445   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:23.360290   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:23.407113   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:23.439799   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:23.484136   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:23.513902   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 11:51:23.551266   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:51:23.581930   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:23.612470   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:51:23.644003   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:23.671068   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:23.695514   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:23.722711   59674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:23.742312   59674 ssh_runner.go:195] Run: openssl version
	I0722 11:51:23.749680   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:23.763975   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.769799   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.769848   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.777286   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:51:23.788007   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:23.799005   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.803367   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.803405   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.809239   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:23.820095   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:23.832492   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.837230   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.837268   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.842861   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:23.853772   59674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:23.858178   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:23.864134   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:23.870035   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:23.875939   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:23.881552   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:23.887286   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 11:51:23.893029   59674 kubeadm.go:392] StartCluster: {Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:23.893133   59674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:23.893184   59674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:23.939121   59674 cri.go:89] found id: ""
	I0722 11:51:23.939187   59674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:23.951089   59674 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:23.951108   59674 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:23.951154   59674 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:23.962212   59674 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:23.963627   59674 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-101261" does not appear in /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:51:23.964627   59674 kubeconfig.go:62] /home/jenkins/minikube-integration/19313-5960/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-101261" cluster setting kubeconfig missing "old-k8s-version-101261" context setting]
	I0722 11:51:23.966075   59674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:24.070513   59674 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:24.081628   59674 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0722 11:51:24.081662   59674 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:24.081674   59674 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:24.081728   59674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:24.117673   59674 cri.go:89] found id: ""
	I0722 11:51:24.117750   59674 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:24.134081   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:24.144294   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:24.144315   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:24.144366   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:51:24.153640   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:24.153685   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:24.163252   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:51:24.173762   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:24.173815   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:24.183272   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:51:24.194090   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:24.194148   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:24.205213   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:51:24.215709   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:24.215787   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:24.226876   59674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:24.237966   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:24.378277   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
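
Note on the restart path shown above: instead of a full "kubeadm init", minikube re-runs the individual "kubeadm init phase" subcommands against the rendered /var/tmp/minikube/kubeadm.yaml (certs and kubeconfig here; kubelet-start, control-plane and etcd follow later in this log). A minimal Go sketch of that same sequence is below; it is an illustrative local-exec version, not minikube's actual code, which runs each command over SSH inside the VM.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// Re-run the kubeadm init phases used by the restart path above, in order,
// against a pre-rendered kubeadm config file. Illustrative sketch only.
func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		fmt.Println("kubeadm", args)
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubeadm %v failed: %v\n%s", args, err, out)
		}
	}
}
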
	I0722 11:51:20.939620   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.940073   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.940106   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:20.940007   60829 retry.go:31] will retry after 1.295925992s: waiting for machine to come up
	I0722 11:51:22.237614   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:22.238096   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:22.238128   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:22.238045   60829 retry.go:31] will retry after 1.652562745s: waiting for machine to come up
	I0722 11:51:23.891976   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:23.892496   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:23.892519   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:23.892468   60829 retry.go:31] will retry after 2.313623774s: waiting for machine to come up
	I0722 11:51:24.839903   59477 pod_ready.go:92] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:51:24.839939   59477 pod_ready.go:81] duration metric: took 13.509966584s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:24.839957   59477 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:26.847104   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:29.345675   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:25.787025   59674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.408710522s)
	I0722 11:51:25.787059   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.031231   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.120122   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.216108   59674 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:26.216204   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:26.717257   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:27.216782   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:27.716476   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:28.216529   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:28.716302   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:29.216249   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:29.717071   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:30.216364   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
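
The repeated pgrep calls above are minikube waiting for a kube-apiserver process to appear after the control-plane phases, polling roughly every 500ms. A stand-alone Go sketch of an equivalent poll loop follows; it is a hypothetical helper, not the minikube source, and assumes local pgrep rather than the sudo-over-SSH invocation in the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll until a process matching the kube-apiserver pattern shows up,
// mirroring the repeated "pgrep -xnf kube-apiserver.*minikube.*" calls above.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver process is up")
}
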
	I0722 11:51:26.207294   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:26.207841   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:26.207867   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:26.207805   60829 retry.go:31] will retry after 2.606127418s: waiting for machine to come up
	I0722 11:51:28.817432   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:28.817795   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:28.817851   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:28.817748   60829 retry.go:31] will retry after 2.617524673s: waiting for machine to come up
	I0722 11:51:31.346476   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:33.847820   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:30.716961   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.216474   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.716685   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:32.216748   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:32.716886   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:33.216333   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:33.717052   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:34.217128   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:34.716466   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:35.216975   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.436413   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:31.436710   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:31.436745   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:31.436665   60829 retry.go:31] will retry after 3.455203757s: waiting for machine to come up
	I0722 11:51:34.896151   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.896595   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Found IP for machine: 192.168.39.87
	I0722 11:51:34.896619   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Reserving static IP address...
	I0722 11:51:34.896637   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has current primary IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.897007   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Reserved static IP address: 192.168.39.87
	I0722 11:51:34.897037   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-605740", mac: "52:54:00:23:45:e9", ip: "192.168.39.87"} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:34.897074   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for SSH to be available...
	I0722 11:51:34.897094   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | skip adding static IP to network mk-default-k8s-diff-port-605740 - found existing host DHCP lease matching {name: "default-k8s-diff-port-605740", mac: "52:54:00:23:45:e9", ip: "192.168.39.87"}
	I0722 11:51:34.897107   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Getting to WaitForSSH function...
	I0722 11:51:34.899104   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.899428   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:34.899450   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.899570   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Using SSH client type: external
	I0722 11:51:34.899594   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa (-rw-------)
	I0722 11:51:34.899619   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:34.899636   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | About to run SSH command:
	I0722 11:51:34.899651   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | exit 0
	I0722 11:51:35.028440   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | SSH cmd err, output: <nil>: 
	I0722 11:51:35.028814   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetConfigRaw
	I0722 11:51:35.029407   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:35.031646   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.031967   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.031998   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.032179   60225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/config.json ...
	I0722 11:51:35.032355   60225 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:35.032372   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:35.032587   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.034608   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.034924   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.034944   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.035089   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.035242   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.035368   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.035497   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.035637   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.035812   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.035823   60225 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:35.148621   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:35.148655   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.148914   60225 buildroot.go:166] provisioning hostname "default-k8s-diff-port-605740"
	I0722 11:51:35.148945   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.149128   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.151753   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.152146   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.152170   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.152294   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.152461   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.152591   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.152706   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.152847   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.153057   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.153079   60225 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-605740 && echo "default-k8s-diff-port-605740" | sudo tee /etc/hostname
	I0722 11:51:35.278248   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-605740
	
	I0722 11:51:35.278277   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.281778   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.282158   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.282189   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.282361   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.282539   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.282712   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.282826   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.283014   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.283239   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.283266   60225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-605740' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-605740/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-605740' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:35.405142   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:35.405176   60225 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:35.405215   60225 buildroot.go:174] setting up certificates
	I0722 11:51:35.405228   60225 provision.go:84] configureAuth start
	I0722 11:51:35.405240   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.405502   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:35.407912   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.408262   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.408284   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.408435   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.410456   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.410794   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.410821   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.410959   60225 provision.go:143] copyHostCerts
	I0722 11:51:35.411021   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:35.411034   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:35.411613   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:35.411720   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:35.411729   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:35.411749   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:35.411803   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:35.411811   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:35.411827   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:35.411881   60225 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-605740 san=[127.0.0.1 192.168.39.87 default-k8s-diff-port-605740 localhost minikube]
	I0722 11:51:36.476985   58921 start.go:364] duration metric: took 53.473936955s to acquireMachinesLock for "no-preload-339929"
	I0722 11:51:36.477060   58921 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:51:36.477071   58921 fix.go:54] fixHost starting: 
	I0722 11:51:36.477497   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:51:36.477538   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:51:36.494783   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33955
	I0722 11:51:36.495220   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:51:36.495728   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:51:36.495749   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:51:36.496045   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:51:36.496241   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:36.496399   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:51:36.497658   58921 fix.go:112] recreateIfNeeded on no-preload-339929: state=Stopped err=<nil>
	I0722 11:51:36.497681   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	W0722 11:51:36.497840   58921 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:51:36.499655   58921 out.go:177] * Restarting existing kvm2 VM for "no-preload-339929" ...
	I0722 11:51:35.787061   60225 provision.go:177] copyRemoteCerts
	I0722 11:51:35.787119   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:35.787143   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.789647   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.790048   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.790081   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.790289   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.790502   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.790665   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.790815   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:35.878791   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0722 11:51:35.902034   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:51:35.925234   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:35.948008   60225 provision.go:87] duration metric: took 542.764534ms to configureAuth
	I0722 11:51:35.948038   60225 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:35.948231   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:51:35.948315   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.951029   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.951381   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.951413   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.951561   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.951777   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.951927   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.952064   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.952196   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.952447   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.952465   60225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:36.234284   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:36.234329   60225 machine.go:97] duration metric: took 1.201960693s to provisionDockerMachine
	I0722 11:51:36.234342   60225 start.go:293] postStartSetup for "default-k8s-diff-port-605740" (driver="kvm2")
	I0722 11:51:36.234355   60225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:36.234375   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.234712   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:36.234742   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.237536   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.237897   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.237928   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.238045   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.238253   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.238435   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.238580   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.322600   60225 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:36.326734   60225 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:36.326753   60225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:36.326809   60225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:36.326893   60225 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:36.326981   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:36.335877   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:36.359701   60225 start.go:296] duration metric: took 125.346106ms for postStartSetup
	I0722 11:51:36.359734   60225 fix.go:56] duration metric: took 20.186375753s for fixHost
	I0722 11:51:36.359751   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.362282   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.362581   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.362603   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.362782   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.362976   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.363121   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.363218   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.363355   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:36.363506   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:36.363515   60225 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:51:36.476833   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649096.450051771
	
	I0722 11:51:36.476869   60225 fix.go:216] guest clock: 1721649096.450051771
	I0722 11:51:36.476877   60225 fix.go:229] Guest: 2024-07-22 11:51:36.450051771 +0000 UTC Remote: 2024-07-22 11:51:36.359737602 +0000 UTC m=+140.620851572 (delta=90.314169ms)
	I0722 11:51:36.476895   60225 fix.go:200] guest clock delta is within tolerance: 90.314169ms
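
For context, fix.go reads the guest clock over SSH (the "date" command above), compares it to the host clock, and only resyncs when the delta exceeds a tolerance; here the ~90ms delta is accepted. A minimal sketch of that tolerance check is below; the 1s tolerance value is an illustrative assumption, the log does not state the actual threshold.

package main

import (
	"fmt"
	"time"
)

// Accept the guest clock if it is within an assumed tolerance of the host
// clock, mirroring the "guest clock delta is within tolerance" decision above.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(90 * time.Millisecond) // roughly the delta seen in the log
	if delta, ok := clockDeltaOK(guest, host, time.Second); ok {
		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %s exceeds tolerance, would resync\n", delta)
	}
}
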
	I0722 11:51:36.476900   60225 start.go:83] releasing machines lock for "default-k8s-diff-port-605740", held for 20.303575504s
	I0722 11:51:36.476926   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.477201   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:36.480567   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.480990   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.481020   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.481182   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481657   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481827   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481906   60225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:36.481947   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.482026   60225 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:36.482044   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.484577   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.484762   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485029   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.485054   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485199   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.485224   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485246   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.485393   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.485406   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.485524   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.485537   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.485652   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.485729   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.485788   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.565892   60225 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:36.592221   60225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:36.739153   60225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:36.746870   60225 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:36.746933   60225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:36.766745   60225 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:36.766769   60225 start.go:495] detecting cgroup driver to use...
	I0722 11:51:36.766837   60225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:36.782140   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:36.797037   60225 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:36.797118   60225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:36.810796   60225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:36.823955   60225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:36.943613   60225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:37.123238   60225 docker.go:233] disabling docker service ...
	I0722 11:51:37.123318   60225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:37.138682   60225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:37.153426   60225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:37.279469   60225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:37.404250   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:37.428047   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:37.446939   60225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 11:51:37.446994   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.457326   60225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:37.457400   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.468141   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.479246   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.489857   60225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:37.502713   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.517197   60225 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.537115   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.548917   60225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:37.559530   60225 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:37.559590   60225 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:37.574785   60225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:51:37.585589   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:37.730483   60225 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:37.888282   60225 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:37.888373   60225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:37.893498   60225 start.go:563] Will wait 60s for crictl version
	I0722 11:51:37.893555   60225 ssh_runner.go:195] Run: which crictl
	I0722 11:51:37.897212   60225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:37.940959   60225 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:37.941054   60225 ssh_runner.go:195] Run: crio --version
	I0722 11:51:37.969273   60225 ssh_runner.go:195] Run: crio --version
	I0722 11:51:38.001475   60225 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
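
The CRI-O preparation above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, default_sysctls) followed by a daemon-reload and a crio restart. The Go sketch below performs the same kind of in-place rewrite for the two simplest keys; it is an illustration of the effect of those sed commands, not minikube's implementation.

package main

import (
	"log"
	"os"
	"regexp"
)

// Rewrite pause_image and cgroup_manager in the CRI-O drop-in, matching the
// effect of the "sudo sed -i" commands in the log above. Needs root to write.
func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
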
	I0722 11:51:36.345564   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:38.349105   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:35.716593   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.216517   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.716294   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:37.217023   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:37.716511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:38.216231   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:38.716522   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:39.216492   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:39.716478   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:40.216337   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.500994   58921 main.go:141] libmachine: (no-preload-339929) Calling .Start
	I0722 11:51:36.501149   58921 main.go:141] libmachine: (no-preload-339929) Ensuring networks are active...
	I0722 11:51:36.501737   58921 main.go:141] libmachine: (no-preload-339929) Ensuring network default is active
	I0722 11:51:36.502002   58921 main.go:141] libmachine: (no-preload-339929) Ensuring network mk-no-preload-339929 is active
	I0722 11:51:36.502421   58921 main.go:141] libmachine: (no-preload-339929) Getting domain xml...
	I0722 11:51:36.503225   58921 main.go:141] libmachine: (no-preload-339929) Creating domain...
	I0722 11:51:37.794982   58921 main.go:141] libmachine: (no-preload-339929) Waiting to get IP...
	I0722 11:51:37.795825   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:37.796235   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:37.796291   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:37.796218   61023 retry.go:31] will retry after 217.454766ms: waiting for machine to come up
	I0722 11:51:38.015757   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.016236   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.016258   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.016185   61023 retry.go:31] will retry after 374.564997ms: waiting for machine to come up
	I0722 11:51:38.392755   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.393280   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.393310   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.393238   61023 retry.go:31] will retry after 462.45005ms: waiting for machine to come up
	I0722 11:51:38.856969   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.857508   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.857539   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.857455   61023 retry.go:31] will retry after 440.89249ms: waiting for machine to come up
	I0722 11:51:39.300253   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:39.300834   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:39.300860   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:39.300774   61023 retry.go:31] will retry after 746.547558ms: waiting for machine to come up
	I0722 11:51:40.048708   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:40.049175   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:40.049211   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:40.049133   61023 retry.go:31] will retry after 608.540931ms: waiting for machine to come up
	I0722 11:51:38.002695   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:38.005678   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:38.006057   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:38.006085   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:38.006276   60225 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:38.010327   60225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:38.023216   60225 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:38.023326   60225 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:51:38.023375   60225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:38.059519   60225 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 11:51:38.059603   60225 ssh_runner.go:195] Run: which lz4
	I0722 11:51:38.063709   60225 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 11:51:38.068879   60225 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:51:38.068903   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 11:51:39.570299   60225 crio.go:462] duration metric: took 1.50662056s to copy over tarball
	I0722 11:51:39.570380   60225 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:40.846268   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:42.848761   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:40.716395   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:41.216516   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:41.716363   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:42.217236   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:42.716938   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:43.216950   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:43.717242   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.216318   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.716925   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.216991   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:40.658992   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:40.659502   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:40.659542   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:40.659447   61023 retry.go:31] will retry after 974.447874ms: waiting for machine to come up
	I0722 11:51:41.636057   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:41.636596   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:41.636620   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:41.636538   61023 retry.go:31] will retry after 1.040271869s: waiting for machine to come up
	I0722 11:51:42.678559   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:42.678995   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:42.679018   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:42.678938   61023 retry.go:31] will retry after 1.797018808s: waiting for machine to come up
	I0722 11:51:44.477360   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:44.477729   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:44.477764   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:44.477687   61023 retry.go:31] will retry after 2.040933698s: waiting for machine to come up
	I0722 11:51:41.921416   60225 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.35100934s)
	I0722 11:51:41.921453   60225 crio.go:469] duration metric: took 2.351127326s to extract the tarball
	I0722 11:51:41.921460   60225 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:41.959856   60225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:42.011834   60225 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:51:42.011864   60225 cache_images.go:84] Images are preloaded, skipping loading
	I0722 11:51:42.011874   60225 kubeadm.go:934] updating node { 192.168.39.87 8444 v1.30.3 crio true true} ...
	I0722 11:51:42.012016   60225 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-605740 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:42.012101   60225 ssh_runner.go:195] Run: crio config
	I0722 11:51:42.067629   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:51:42.067650   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:42.067661   60225 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:42.067681   60225 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.87 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-605740 NodeName:default-k8s-diff-port-605740 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.87"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:51:42.067849   60225 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.87
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-605740"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.87"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:42.067926   60225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 11:51:42.079267   60225 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:42.079331   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:42.089696   60225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0722 11:51:42.109204   60225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:42.125186   60225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0722 11:51:42.143217   60225 ssh_runner.go:195] Run: grep 192.168.39.87	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:42.147117   60225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.87	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:42.159283   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:42.297313   60225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:42.315795   60225 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740 for IP: 192.168.39.87
	I0722 11:51:42.315819   60225 certs.go:194] generating shared ca certs ...
	I0722 11:51:42.315838   60225 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:42.316036   60225 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:42.316104   60225 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:42.316121   60225 certs.go:256] generating profile certs ...
	I0722 11:51:42.316211   60225 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/client.key
	I0722 11:51:42.316281   60225 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.key.82803a6c
	I0722 11:51:42.316344   60225 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.key
	I0722 11:51:42.316515   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:42.316562   60225 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:42.316575   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:42.316606   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:42.316642   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:42.316673   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:42.316729   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:42.317611   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:42.368371   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:42.396161   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:42.423661   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:42.461478   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 11:51:42.492145   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:51:42.523047   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:42.551774   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 11:51:42.576922   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:42.600869   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:42.624223   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:42.647454   60225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:42.664055   60225 ssh_runner.go:195] Run: openssl version
	I0722 11:51:42.670102   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:42.681220   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.685927   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.685979   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.691823   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:42.702680   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:42.713592   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.719980   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.720042   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.727573   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:42.741805   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:42.756511   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.761951   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.762007   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.767540   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:51:42.777758   60225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:42.782242   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:42.787989   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:42.793552   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:42.799083   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:42.804666   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:42.810222   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 11:51:42.818545   60225 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:42.818639   60225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:42.818689   60225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:42.869630   60225 cri.go:89] found id: ""
	I0722 11:51:42.869706   60225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:42.881642   60225 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:42.881666   60225 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:42.881716   60225 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:42.891566   60225 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:42.892605   60225 kubeconfig.go:125] found "default-k8s-diff-port-605740" server: "https://192.168.39.87:8444"
	I0722 11:51:42.894819   60225 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:42.906152   60225 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.87
	I0722 11:51:42.906184   60225 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:42.906197   60225 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:42.906244   60225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:42.943687   60225 cri.go:89] found id: ""
	I0722 11:51:42.943765   60225 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:42.962989   60225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:42.974334   60225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:42.974351   60225 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:42.974398   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 11:51:42.985009   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:42.985069   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:42.996084   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 11:51:43.006592   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:43.006643   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:43.017500   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 11:51:43.026779   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:43.026853   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:43.037913   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 11:51:43.048504   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:43.048548   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:43.058045   60225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:43.067626   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:43.195638   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.027881   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.237863   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.306672   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.409525   60225 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:44.409655   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.909710   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.409772   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.465579   60225 api_server.go:72] duration metric: took 1.056052731s to wait for apiserver process to appear ...
	I0722 11:51:45.465613   60225 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:51:45.465634   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:45.466164   60225 api_server.go:269] stopped: https://192.168.39.87:8444/healthz: Get "https://192.168.39.87:8444/healthz": dial tcp 192.168.39.87:8444: connect: connection refused
	I0722 11:51:45.349550   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:47.847373   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:45.717299   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.216545   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.717273   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:47.217030   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:47.716837   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:48.216368   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:48.716993   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:49.216273   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:49.717087   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:50.216313   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.520086   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:46.520553   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:46.520583   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:46.520514   61023 retry.go:31] will retry after 2.21537525s: waiting for machine to come up
	I0722 11:51:48.737964   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:48.738435   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:48.738478   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:48.738387   61023 retry.go:31] will retry after 3.351574636s: waiting for machine to come up
	I0722 11:51:45.966026   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:48.955885   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:48.955919   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:48.955938   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.001144   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:49.001176   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:49.001190   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.011522   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:49.011567   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:49.466002   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.470318   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:49.470339   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:49.965932   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.974634   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:49.974659   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:50.466354   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:50.471348   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:50.471375   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:50.966014   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:50.970321   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:50.970344   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:51.466452   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:51.470676   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:51.470703   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:51.966303   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:51.970628   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:51.970654   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:52.466173   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:52.473153   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 200:
	ok
	I0722 11:51:52.479257   60225 api_server.go:141] control plane version: v1.30.3
	I0722 11:51:52.479280   60225 api_server.go:131] duration metric: took 7.013661456s to wait for apiserver health ...
	I0722 11:51:52.479289   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:51:52.479295   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:52.480886   60225 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
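The 500 responses above are the kube-apiserver's aggregated /healthz output: each [+]/[-] line is an individual check, and the endpoint keeps returning 500 until every check (here poststarthook/apiservice-discovery-controller) passes, at which point the poller sees 200 and the wait ends. A minimal sketch for querying the same endpoint by hand, assuming kubectl or curl access to this cluster (-k skips TLS verification and is for illustration only):

    kubectl get --raw='/healthz?verbose'    # per-check breakdown, same format as the log above
    kubectl get --raw='/livez?verbose'      # liveness-only variant
    curl -sk 'https://192.168.39.87:8444/healthz?verbose'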
	I0722 11:51:50.346624   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:52.847483   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:50.716844   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:51.216793   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:51.716262   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.216710   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.716538   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:53.216424   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:53.716256   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:54.216266   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:54.716357   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:55.217214   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.091480   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:52.091931   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:52.091958   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:52.091893   61023 retry.go:31] will retry after 3.862235046s: waiting for machine to come up
	I0722 11:51:52.481952   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:51:52.493302   60225 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
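The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is minikube's bridge CNI configuration; its exact contents are not shown in the log. Purely as an illustration of the shape such a bridge-plus-portmap conflist takes (the CNI version and pod CIDR below are assumptions, not values read from this VM):

    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }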
	I0722 11:51:52.517874   60225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:51:52.525926   60225 system_pods.go:59] 8 kube-system pods found
	I0722 11:51:52.525951   60225 system_pods.go:61] "coredns-7db6d8ff4d-dp56v" [5027da7d-5dc8-4ac5-ae15-ec99dffdce28] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:51:52.525960   60225 system_pods.go:61] "etcd-default-k8s-diff-port-605740" [648d4b21-2c2a-4ac7-a114-660379463d7d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:51:52.525967   60225 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-605740" [89ae1525-c944-4645-8951-e8834c9347b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:51:52.525978   60225 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-605740" [ff83ae5c-1dea-4633-afb8-c6487d1463b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:51:52.525983   60225 system_pods.go:61] "kube-proxy-ssttk" [6967a89c-ac7d-413f-bd0e-504367edca66] Running
	I0722 11:51:52.525991   60225 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-605740" [f930864f-4486-4c95-96f2-3004f58e80b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:51:52.526001   60225 system_pods.go:61] "metrics-server-569cc877fc-mzcvn" [9913463e-4ff9-4baa-a26e-76694605652e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:51:52.526009   60225 system_pods.go:61] "storage-provisioner" [08880428-a182-4540-a6f7-afffa3fc82a6] Running
	I0722 11:51:52.526020   60225 system_pods.go:74] duration metric: took 8.125407ms to wait for pod list to return data ...
	I0722 11:51:52.526030   60225 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:51:52.528765   60225 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:51:52.528788   60225 node_conditions.go:123] node cpu capacity is 2
	I0722 11:51:52.528801   60225 node_conditions.go:105] duration metric: took 2.765554ms to run NodePressure ...
	I0722 11:51:52.528822   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:52.797071   60225 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:51:52.802281   60225 kubeadm.go:739] kubelet initialised
	I0722 11:51:52.802311   60225 kubeadm.go:740] duration metric: took 5.210344ms waiting for restarted kubelet to initialise ...
	I0722 11:51:52.802322   60225 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:51:52.808512   60225 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.819816   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.819849   60225 pod_ready.go:81] duration metric: took 11.258701ms for pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.819861   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.819870   60225 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.825916   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.825958   60225 pod_ready.go:81] duration metric: took 6.076418ms for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.825977   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.825990   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.832243   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.832272   60225 pod_ready.go:81] duration metric: took 6.26533ms for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.832286   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.832295   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:54.841497   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
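The repeated node ... is currently not "Ready" (skipping!) lines mean the per-pod wait short-circuits with an error while the node object itself is still NotReady after the restart, and the poller moves on to the next system pod. One way to inspect the same state by hand against this profile (a sketch, assuming the stock minikube CLI is on PATH) is:

    minikube -p default-k8s-diff-port-605740 kubectl -- get nodes
    minikube -p default-k8s-diff-port-605740 kubectl -- -n kube-system get pods -o wide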
	I0722 11:51:55.958678   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.959165   58921 main.go:141] libmachine: (no-preload-339929) Found IP for machine: 192.168.61.112
	I0722 11:51:55.959188   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has current primary IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.959195   58921 main.go:141] libmachine: (no-preload-339929) Reserving static IP address...
	I0722 11:51:55.959744   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "no-preload-339929", mac: "52:54:00:8d:72:69", ip: "192.168.61.112"} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:55.959774   58921 main.go:141] libmachine: (no-preload-339929) DBG | skip adding static IP to network mk-no-preload-339929 - found existing host DHCP lease matching {name: "no-preload-339929", mac: "52:54:00:8d:72:69", ip: "192.168.61.112"}
	I0722 11:51:55.959790   58921 main.go:141] libmachine: (no-preload-339929) Reserved static IP address: 192.168.61.112
	I0722 11:51:55.959806   58921 main.go:141] libmachine: (no-preload-339929) Waiting for SSH to be available...
	I0722 11:51:55.959817   58921 main.go:141] libmachine: (no-preload-339929) DBG | Getting to WaitForSSH function...
	I0722 11:51:55.962308   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.962703   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:55.962724   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.962853   58921 main.go:141] libmachine: (no-preload-339929) DBG | Using SSH client type: external
	I0722 11:51:55.962876   58921 main.go:141] libmachine: (no-preload-339929) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa (-rw-------)
	I0722 11:51:55.962924   58921 main.go:141] libmachine: (no-preload-339929) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:55.962946   58921 main.go:141] libmachine: (no-preload-339929) DBG | About to run SSH command:
	I0722 11:51:55.962963   58921 main.go:141] libmachine: (no-preload-339929) DBG | exit 0
	I0722 11:51:56.084629   58921 main.go:141] libmachine: (no-preload-339929) DBG | SSH cmd err, output: <nil>: 
	I0722 11:51:56.085007   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetConfigRaw
	I0722 11:51:56.085616   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:56.088120   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.088546   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.088576   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.088842   58921 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/config.json ...
	I0722 11:51:56.089066   58921 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:56.089088   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:56.089276   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.091216   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.091486   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.091508   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.091653   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.091823   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.091982   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.092132   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.092262   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.092434   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.092444   58921 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:56.192862   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:56.192891   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.193179   58921 buildroot.go:166] provisioning hostname "no-preload-339929"
	I0722 11:51:56.193207   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.193465   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.196195   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.196607   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.196637   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.196843   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.197048   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.197213   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.197358   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.197509   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.197707   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.197722   58921 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-339929 && echo "no-preload-339929" | sudo tee /etc/hostname
	I0722 11:51:56.309997   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-339929
	
	I0722 11:51:56.310019   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.312923   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.313263   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.313290   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.313481   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.313682   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.313882   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.314043   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.314223   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.314413   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.314435   58921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-339929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-339929/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-339929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:56.430088   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:56.430113   58921 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:56.430136   58921 buildroot.go:174] setting up certificates
	I0722 11:51:56.430147   58921 provision.go:84] configureAuth start
	I0722 11:51:56.430158   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.430428   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:56.433041   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.433421   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.433449   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.433619   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.436002   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.436300   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.436333   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.436508   58921 provision.go:143] copyHostCerts
	I0722 11:51:56.436579   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:56.436595   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:56.436665   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:56.436828   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:56.436843   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:56.436876   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:56.436950   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:56.436961   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:56.436987   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:56.437053   58921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.no-preload-339929 san=[127.0.0.1 192.168.61.112 localhost minikube no-preload-339929]
	I0722 11:51:56.792128   58921 provision.go:177] copyRemoteCerts
	I0722 11:51:56.792205   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:56.792238   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.794952   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.795254   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.795283   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.795439   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.795636   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.795772   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.795944   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:56.874574   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:56.898653   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 11:51:56.923200   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:51:56.946393   58921 provision.go:87] duration metric: took 516.233368ms to configureAuth
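configureAuth above regenerates a server certificate whose SANs have to cover every name and address used to reach the machine (the san=[...] list in the provision.go:117 line), then copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A quick way to double-check the SANs on the generated certificate from the host, assuming openssl is installed (path taken from the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'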
	I0722 11:51:56.946416   58921 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:56.946612   58921 config.go:182] Loaded profile config "no-preload-339929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 11:51:56.946702   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.949412   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.949923   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.949955   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.949982   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.950195   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.950330   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.950479   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.950591   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.950844   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.950865   58921 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:57.225885   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:57.225909   58921 machine.go:97] duration metric: took 1.136828183s to provisionDockerMachine
	I0722 11:51:57.225924   58921 start.go:293] postStartSetup for "no-preload-339929" (driver="kvm2")
	I0722 11:51:57.225941   58921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:57.225967   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.226315   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:57.226346   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.229404   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.229787   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.229816   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.230008   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.230210   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.230382   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.230518   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.317585   58921 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:57.323102   58921 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:57.323133   58921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:57.323218   58921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:57.323319   58921 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:57.323446   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:57.336656   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:57.365241   58921 start.go:296] duration metric: took 139.301981ms for postStartSetup
	I0722 11:51:57.365299   58921 fix.go:56] duration metric: took 20.888227284s for fixHost
	I0722 11:51:57.365322   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.368451   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.368792   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.368825   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.368964   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.369191   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.369362   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.369532   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.369698   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:57.369918   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:57.369929   58921 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:51:57.478389   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649117.454433204
	
	I0722 11:51:57.478414   58921 fix.go:216] guest clock: 1721649117.454433204
	I0722 11:51:57.478425   58921 fix.go:229] Guest: 2024-07-22 11:51:57.454433204 +0000 UTC Remote: 2024-07-22 11:51:57.365303623 +0000 UTC m=+356.953957779 (delta=89.129581ms)
	I0722 11:51:57.478469   58921 fix.go:200] guest clock delta is within tolerance: 89.129581ms
	I0722 11:51:57.478488   58921 start.go:83] releasing machines lock for "no-preload-339929", held for 21.001447333s
	I0722 11:51:57.478515   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.478798   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:57.481848   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.482283   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.482313   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.482464   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483024   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483211   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483286   58921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:57.483339   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.483594   58921 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:57.483620   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.486149   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486402   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486561   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.486591   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486746   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.486791   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.486808   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486969   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.487059   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.487141   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.487289   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.487306   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.487460   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.487645   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.591994   58921 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:57.598617   58921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:57.754364   58921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:57.761045   58921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:57.761104   58921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:57.778215   58921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:57.778244   58921 start.go:495] detecting cgroup driver to use...
	I0722 11:51:57.778315   58921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:57.794964   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:57.811232   58921 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:57.811292   58921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:57.826950   58921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:57.842302   58921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:57.971792   58921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:58.129047   58921 docker.go:233] disabling docker service ...
	I0722 11:51:58.129104   58921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:58.146348   58921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:58.160958   58921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:58.294011   58921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:58.414996   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:58.430045   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:58.456092   58921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0722 11:51:58.456186   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.471939   58921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:58.472003   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.485092   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.497749   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.510721   58921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:58.522286   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.535122   58921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.555717   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.567386   58921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:58.577638   58921 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:58.577717   58921 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:58.592354   58921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:51:58.602448   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:58.729652   58921 ssh_runner.go:195] Run: sudo systemctl restart crio
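Taken together, the commands above point crictl at the CRI-O socket and edit the 02-crio.conf drop-in before restarting the runtime. Reconstructed from those sed invocations (not read back from the VM, so a sketch rather than the literal file), the relevant settings end up as:

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (keys touched above)
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    # verify on the guest:
    sudo cat /etc/crio/crio.conf.d/02-crio.conf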
	I0722 11:51:58.881699   58921 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:58.881761   58921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:58.887049   58921 start.go:563] Will wait 60s for crictl version
	I0722 11:51:58.887099   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:58.890867   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:58.933081   58921 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:58.933171   58921 ssh_runner.go:195] Run: crio --version
	I0722 11:51:58.960418   58921 ssh_runner.go:195] Run: crio --version
	I0722 11:51:58.992787   58921 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0722 11:51:54.847605   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:57.346927   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:55.716788   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:56.216920   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:56.716328   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:57.217140   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:57.717149   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.217011   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.716511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:59.216969   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:59.717145   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:00.216454   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.994009   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:58.996823   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:58.997258   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:58.997279   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:58.997465   58921 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:59.001724   58921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:59.014700   58921 kubeadm.go:883] updating cluster {Name:no-preload-339929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:59.014819   58921 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 11:51:59.014847   58921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:59.049135   58921 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0722 11:51:59.049167   58921 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 11:51:59.049252   58921 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.049268   58921 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.049310   58921 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.049314   58921 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.049335   58921 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.049249   58921 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.049445   58921 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.049480   58921 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0722 11:51:59.050964   58921 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.050974   58921 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.050994   58921 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.051032   58921 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0722 11:51:59.051056   58921 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.051075   58921 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.051098   58921 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.051039   58921 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.220737   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.233831   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.239620   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.240125   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.240548   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.269898   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0722 11:51:59.293368   58921 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0722 11:51:59.293420   58921 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.293468   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.309956   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.336323   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.359236   58921 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0722 11:51:59.359284   58921 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.359336   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.359236   58921 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0722 11:51:59.359371   58921 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.359400   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.371412   58921 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0722 11:51:59.371449   58921 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.371485   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.404322   58921 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0722 11:51:59.404364   58921 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.404427   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.542134   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.542279   58921 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0722 11:51:59.542331   58921 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.542347   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.542360   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.542383   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.542439   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.542444   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.542691   58921 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0722 11:51:59.542725   58921 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.542757   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.653771   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.653819   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.653859   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0722 11:51:59.653877   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.653935   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0722 11:51:59.653945   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:51:59.653994   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:51:59.654000   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0722 11:51:59.654034   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0722 11:51:59.654078   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:51:59.654091   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:51:59.654101   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.706185   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706207   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.706218   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 11:51:59.706250   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.706256   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706292   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:51:59.706298   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0722 11:51:59.706369   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706464   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0722 11:51:59.706509   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0722 11:51:59.706554   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:51:57.342604   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:59.839045   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:59.846551   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:02.346391   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.347558   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:00.717154   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:01.216534   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:01.716349   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.217140   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.716458   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:03.216539   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:03.717179   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:04.216994   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:04.716264   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:05.216962   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
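	The 59674 run above is simply polling, roughly every 500ms, for an apiserver process whose command line matches kube-apiserver.*minikube.* until pgrep succeeds. A minimal sketch of that loop (pattern and interval match the log; the 4-minute deadline is an assumption, and minikube actually runs pgrep over SSH on the node rather than locally):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning reports whether a kube-apiserver process started by
	// minikube exists. pgrep flags: -x match the whole command line against the
	// pattern, -n newest matching process, -f match the full command line.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute) // assumed budget, not from the log
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver process is up")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}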
	I0722 11:52:02.170882   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.464606279s)
	I0722 11:52:02.170914   58921 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.464582845s)
	I0722 11:52:02.170942   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0722 11:52:02.170923   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0722 11:52:02.170949   58921 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.464369058s)
	I0722 11:52:02.170970   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:52:02.170972   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0722 11:52:02.171024   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:52:04.139100   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.9680515s)
	I0722 11:52:04.139132   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0722 11:52:04.139166   58921 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:52:04.139250   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:52:01.840270   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.339017   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.840071   60225 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.840097   60225 pod_ready.go:81] duration metric: took 12.007790604s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.840110   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ssttk" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.845312   60225 pod_ready.go:92] pod "kube-proxy-ssttk" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.845336   60225 pod_ready.go:81] duration metric: took 5.218113ms for pod "kube-proxy-ssttk" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.845348   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.850239   60225 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.850264   60225 pod_ready.go:81] duration metric: took 4.905551ms for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.850273   60225 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:06.849408   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:09.347362   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:05.716753   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:06.216886   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:06.717064   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.217069   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.716953   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:08.216521   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:08.716334   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:09.216504   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:09.716904   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:10.216483   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.435274   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.29599961s)
	I0722 11:52:07.435305   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0722 11:52:07.435331   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:52:07.435368   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:52:08.882569   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.447179999s)
	I0722 11:52:08.882593   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0722 11:52:08.882621   58921 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:52:08.882670   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:52:06.857393   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:09.357742   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:11.845980   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:13.846559   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:10.717066   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:11.216328   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:11.717249   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:12.216579   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:12.716697   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:13.217042   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:13.717186   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:14.216301   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:14.716510   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:15.216925   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:10.861616   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.978918937s)
	I0722 11:52:10.861646   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0722 11:52:10.861670   58921 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:52:10.861717   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:52:11.517096   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 11:52:11.517126   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:52:11.517179   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:52:13.588498   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.071290819s)
	I0722 11:52:13.588531   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0722 11:52:13.588567   58921 cache_images.go:123] Successfully loaded all cached images
	I0722 11:52:13.588580   58921 cache_images.go:92] duration metric: took 14.539397599s to LoadCachedImages
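	The 58921 run follows the same pattern for every cached image above: podman image inspect to compare the on-node image ID against the expected hash, crictl rmi to drop the stale tag, then podman load -i on the tarball that was shipped to /var/lib/minikube/images. A minimal local sketch of that per-image sequence (image name, tarball path, and hash are copied from the log; in minikube these commands run on the node over SSH via ssh_runner, so this is illustrative only):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// imageID returns the image ID podman knows for ref, or "" if the image is absent.
	func imageID(ref string) string {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", ref).Output()
		if err != nil {
			return "" // podman exits non-zero when the image is missing
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		ref := "registry.k8s.io/etcd:3.5.14-0"              // example image from the log
		tarball := "/var/lib/minikube/images/etcd_3.5.14-0" // cached tarball path from the log
		wantID := "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa"

		if got := imageID(ref); got != wantID {
			// Stale or missing image: remove the tag, then load the cached tarball.
			if got != "" {
				_ = exec.Command("sudo", "crictl", "rmi", ref).Run()
			}
			if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
				log.Fatalf("podman load %s: %v", tarball, err)
			}
		}
		fmt.Println("image present:", ref)
	}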
	I0722 11:52:13.588591   58921 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.31.0-beta.0 crio true true} ...
	I0722 11:52:13.588728   58921 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-339929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:52:13.588806   58921 ssh_runner.go:195] Run: crio config
	I0722 11:52:13.641949   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:52:13.641969   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:52:13.641978   58921 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:52:13.641997   58921 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-339929 NodeName:no-preload-339929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:52:13.642187   58921 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-339929"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:52:13.642258   58921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 11:52:13.653174   58921 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:52:13.653244   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:52:13.662655   58921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0722 11:52:13.678906   58921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 11:52:13.699269   58921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0722 11:52:13.718873   58921 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I0722 11:52:13.722962   58921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:52:13.736241   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:52:13.858093   58921 ssh_runner.go:195] Run: sudo systemctl start kubelet
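	The two ssh_runner calls before the daemon-reload rewrite /etc/hosts so control-plane.minikube.internal resolves to the node IP: drop any existing entry for that name, append a fresh one, and copy the file back as root. A sketch of the same edit in Go (IP and hostname are copied from the log; the program assumes it runs as root directly on the node):

	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const ip, host = "192.168.61.112", "control-plane.minikube.internal"

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		// Keep every line that is not an old entry for the control-plane alias.
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		var kept []string
		for _, line := range lines {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		// Append the fresh mapping and write the file back.
		kept = append(kept, ip+"\t"+host)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			log.Fatal(err)
		}
	}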
	I0722 11:52:13.875377   58921 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929 for IP: 192.168.61.112
	I0722 11:52:13.875402   58921 certs.go:194] generating shared ca certs ...
	I0722 11:52:13.875421   58921 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:52:13.875588   58921 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:52:13.875664   58921 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:52:13.875677   58921 certs.go:256] generating profile certs ...
	I0722 11:52:13.875785   58921 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.key
	I0722 11:52:13.875857   58921 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.key.26403d20
	I0722 11:52:13.875895   58921 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.key
	I0722 11:52:13.875998   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:52:13.876025   58921 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:52:13.876036   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:52:13.876057   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:52:13.876079   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:52:13.876100   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:52:13.876139   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:52:13.876804   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:52:13.923607   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:52:13.952785   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:52:13.983113   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:52:14.012712   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 11:52:14.047958   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:52:14.077411   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:52:14.100978   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:52:14.123416   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:52:14.145662   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:52:14.169188   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:52:14.194650   58921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:52:14.212538   58921 ssh_runner.go:195] Run: openssl version
	I0722 11:52:14.218725   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:52:14.231079   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.235652   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.235695   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.241643   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:52:14.252681   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:52:14.263166   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.267588   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.267629   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.273182   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:52:14.284087   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:52:14.294571   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.298824   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.298870   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.304464   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:52:14.315110   58921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:52:14.319444   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:52:14.325221   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:52:14.330923   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:52:14.336509   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:52:14.342749   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:52:14.348854   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
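	The series of openssl x509 -checkend 86400 calls above only asks whether each control-plane certificate expires within the next 24 hours (86400 seconds). A stdlib-only Go equivalent of one of those checks, using a path from the log (an illustrative sketch, not minikube's certs.go):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}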
	I0722 11:52:14.355682   58921 kubeadm.go:392] StartCluster: {Name:no-preload-339929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:52:14.355818   58921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:52:14.355867   58921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:52:14.395279   58921 cri.go:89] found id: ""
	I0722 11:52:14.395351   58921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:52:14.406738   58921 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:52:14.406755   58921 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:52:14.406793   58921 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:52:14.417161   58921 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:52:14.418468   58921 kubeconfig.go:125] found "no-preload-339929" server: "https://192.168.61.112:8443"
	I0722 11:52:14.420764   58921 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:52:14.430722   58921 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.112
	I0722 11:52:14.430749   58921 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:52:14.430760   58921 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:52:14.430809   58921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:52:14.472164   58921 cri.go:89] found id: ""
	I0722 11:52:14.472228   58921 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:52:14.489758   58921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:52:14.499830   58921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:52:14.499878   58921 kubeadm.go:157] found existing configuration files:
	
	I0722 11:52:14.499932   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:52:14.508977   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:52:14.509024   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:52:14.518199   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:52:14.527136   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:52:14.527182   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:52:14.536182   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:52:14.545425   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:52:14.545482   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:52:14.554843   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:52:14.563681   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:52:14.563722   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:52:14.572855   58921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:52:14.582257   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:14.691452   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.383530   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
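	Because existing configuration files were found, restartPrimaryControlPlane replays kubeadm phase by phase against the generated /var/tmp/minikube/kubeadm.yaml instead of running a full kubeadm init; the remaining phases (control-plane, etcd local, and later addon) appear further down in this log once the apiserver is healthy. A sketch of that ordered sequence (phase names and config path are taken from the log; running them like this outside minikube is purely illustrative):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const cfg = "/var/tmp/minikube/kubeadm.yaml" // path used in the log
		phases := [][]string{
			{"init", "phase", "certs", "all", "--config", cfg},
			{"init", "phase", "kubeconfig", "all", "--config", cfg},
			{"init", "phase", "kubelet-start", "--config", cfg},
			{"init", "phase", "control-plane", "all", "--config", cfg},
			{"init", "phase", "etcd", "local", "--config", cfg},
			{"init", "phase", "addon", "all", "--config", cfg},
		}
		for _, args := range phases {
			cmd := exec.Command("kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				log.Fatalf("kubeadm %v: %v", args, err)
			}
		}
	}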
	I0722 11:52:11.857298   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:14.357114   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:16.347252   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:18.846603   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:15.716962   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.216373   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.716871   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:17.217108   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:17.716670   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:18.216503   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:18.717214   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:19.216481   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:19.716922   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:20.216618   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:15.600861   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.661719   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.756150   58921 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:52:15.756243   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.256571   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.756636   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.788487   58921 api_server.go:72] duration metric: took 1.032338614s to wait for apiserver process to appear ...
	I0722 11:52:16.788511   58921 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:52:16.788538   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:16.789057   58921 api_server.go:269] stopped: https://192.168.61.112:8443/healthz: Get "https://192.168.61.112:8443/healthz": dial tcp 192.168.61.112:8443: connect: connection refused
	I0722 11:52:17.289531   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.643492   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:52:19.643522   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:52:19.643538   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.712047   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:52:19.712087   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:52:19.789319   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.903924   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:19.903964   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:20.289484   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:20.294499   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:20.294532   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:16.357488   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:18.857066   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:20.789245   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:20.795813   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:20.795846   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:21.289564   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:21.294121   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 200:
	ok
	I0722 11:52:21.300616   58921 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 11:52:21.300644   58921 api_server.go:131] duration metric: took 4.512126962s to wait for apiserver health ...
	I0722 11:52:21.300652   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:52:21.300661   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:52:21.302460   58921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
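	The healthz wait above tolerates 403 (anonymous access is forbidden until the RBAC bootstrap roles land) and 500 (post-start hooks still reporting failed) and only stops once /healthz returns 200. A rough sketch of such a probe (endpoint copied from the log; the timeout, interval, and use of InsecureSkipVerify for an anonymous probe are assumptions, not minikube's api_server.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.61.112:8443/healthz" // endpoint from the log
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				// 403 until RBAC bootstraps, 500 while post-start hooks are still failing.
				fmt.Println("not ready, status", code)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for healthz")
	}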
	I0722 11:52:21.347296   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:23.848716   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:20.717047   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.216924   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.716824   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:22.216907   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:22.716538   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:23.216351   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:23.716755   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:24.216816   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:24.717065   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:25.216949   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.303690   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:52:21.315042   58921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:52:21.336417   58921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:52:21.347183   58921 system_pods.go:59] 8 kube-system pods found
	I0722 11:52:21.347225   58921 system_pods.go:61] "coredns-5cfdc65f69-v5qdv" [2321209d-652c-45c1-8d0a-b4ad58f60a25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:52:21.347238   58921 system_pods.go:61] "etcd-no-preload-339929" [9dbeed49-0d34-4643-8a7c-28b9b8b60b00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:52:21.347248   58921 system_pods.go:61] "kube-apiserver-no-preload-339929" [f9675e86-589e-4c6c-b4b5-627e2192b028] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:52:21.347259   58921 system_pods.go:61] "kube-controller-manager-no-preload-339929" [5033e74b-5a1c-4044-aadf-67d5e44b17c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:52:21.347265   58921 system_pods.go:61] "kube-proxy-78tx8" [13f226f0-8837-44d2-aa74-a7db43c73651] Running
	I0722 11:52:21.347276   58921 system_pods.go:61] "kube-scheduler-no-preload-339929" [bf82937c-c95c-4961-afca-60dfe128b6bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:52:21.347288   58921 system_pods.go:61] "metrics-server-78fcd8795b-2lbrr" [1eab4084-3ddf-44f3-9761-130a6f137ea6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:52:21.347294   58921 system_pods.go:61] "storage-provisioner" [66323714-b119-4680-91a3-2e2142e523b4] Running
	I0722 11:52:21.347308   58921 system_pods.go:74] duration metric: took 10.869226ms to wait for pod list to return data ...
	I0722 11:52:21.347316   58921 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:52:21.351215   58921 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:52:21.351242   58921 node_conditions.go:123] node cpu capacity is 2
	I0722 11:52:21.351254   58921 node_conditions.go:105] duration metric: took 3.932625ms to run NodePressure ...
	I0722 11:52:21.351273   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:21.620524   58921 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:52:21.625517   58921 kubeadm.go:739] kubelet initialised
	I0722 11:52:21.625540   58921 kubeadm.go:740] duration metric: took 4.987123ms waiting for restarted kubelet to initialise ...
	I0722 11:52:21.625550   58921 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:52:21.630823   58921 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:23.639602   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.140079   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:25.140103   58921 pod_ready.go:81] duration metric: took 3.509258556s for pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:25.140112   58921 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:20.860912   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:23.356763   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.357406   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:26.345970   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:28.347288   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.716863   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:26.217017   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:26.217108   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:26.259154   59674 cri.go:89] found id: ""
	I0722 11:52:26.259183   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.259193   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:26.259201   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:26.259260   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:26.292777   59674 cri.go:89] found id: ""
	I0722 11:52:26.292801   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.292807   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:26.292813   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:26.292858   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:26.327874   59674 cri.go:89] found id: ""
	I0722 11:52:26.327899   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.327907   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:26.327913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:26.327960   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:26.372370   59674 cri.go:89] found id: ""
	I0722 11:52:26.372405   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.372415   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:26.372421   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:26.372468   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:26.406270   59674 cri.go:89] found id: ""
	I0722 11:52:26.406294   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.406301   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:26.406306   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:26.406355   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:26.441204   59674 cri.go:89] found id: ""
	I0722 11:52:26.441230   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.441237   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:26.441242   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:26.441302   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:26.476132   59674 cri.go:89] found id: ""
	I0722 11:52:26.476162   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.476174   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:26.476180   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:26.476236   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:26.509534   59674 cri.go:89] found id: ""
	I0722 11:52:26.509565   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.509576   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:26.509588   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:26.509601   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:26.564002   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:26.564030   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:26.578619   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:26.578650   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:26.706713   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:26.706738   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:26.706752   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:26.772168   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:26.772201   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:29.313944   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:29.328002   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:29.328076   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:29.367128   59674 cri.go:89] found id: ""
	I0722 11:52:29.367157   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.367166   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:29.367173   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:29.367244   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:29.401552   59674 cri.go:89] found id: ""
	I0722 11:52:29.401581   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.401592   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:29.401599   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:29.401677   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:29.433892   59674 cri.go:89] found id: ""
	I0722 11:52:29.433919   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.433931   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:29.433943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:29.433993   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:29.469619   59674 cri.go:89] found id: ""
	I0722 11:52:29.469649   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.469660   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:29.469667   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:29.469726   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:29.504771   59674 cri.go:89] found id: ""
	I0722 11:52:29.504795   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.504805   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:29.504811   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:29.504871   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:29.538861   59674 cri.go:89] found id: ""
	I0722 11:52:29.538890   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.538900   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:29.538912   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:29.538975   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:29.593633   59674 cri.go:89] found id: ""
	I0722 11:52:29.593669   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.593680   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:29.593688   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:29.593747   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:29.638605   59674 cri.go:89] found id: ""
	I0722 11:52:29.638636   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.638645   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:29.638653   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:29.638664   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:29.691633   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:29.691662   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:29.707277   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:29.707305   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:29.785616   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:29.785638   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:29.785669   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:29.857487   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:29.857517   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:27.146649   58921 pod_ready.go:102] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:28.646058   58921 pod_ready.go:92] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:28.646083   58921 pod_ready.go:81] duration metric: took 3.505964852s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:28.646092   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:27.855581   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:29.856605   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:30.847291   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.847946   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.398141   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:32.411380   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:32.411453   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:32.445857   59674 cri.go:89] found id: ""
	I0722 11:52:32.445882   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.445889   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:32.445895   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:32.445946   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:32.478146   59674 cri.go:89] found id: ""
	I0722 11:52:32.478180   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.478190   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:32.478197   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:32.478268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:32.511110   59674 cri.go:89] found id: ""
	I0722 11:52:32.511138   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.511147   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:32.511161   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:32.511216   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:32.545388   59674 cri.go:89] found id: ""
	I0722 11:52:32.545415   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.545425   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:32.545432   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:32.545489   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:32.579097   59674 cri.go:89] found id: ""
	I0722 11:52:32.579125   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.579135   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:32.579141   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:32.579205   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:32.615302   59674 cri.go:89] found id: ""
	I0722 11:52:32.615333   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.615343   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:32.615350   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:32.615407   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:32.654527   59674 cri.go:89] found id: ""
	I0722 11:52:32.654552   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.654562   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:32.654568   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:32.654625   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:32.689409   59674 cri.go:89] found id: ""
	I0722 11:52:32.689437   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.689445   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:32.689454   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:32.689470   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:32.740478   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:32.740511   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:32.754266   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:32.754299   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:32.824441   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:32.824461   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:32.824475   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:32.896752   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:32.896781   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:30.652706   58921 pod_ready.go:102] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.653310   58921 pod_ready.go:102] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.154169   58921 pod_ready.go:92] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.154195   58921 pod_ready.go:81] duration metric: took 6.508095973s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.154207   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.160406   58921 pod_ready.go:92] pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.160429   58921 pod_ready.go:81] duration metric: took 6.213375ms for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.160440   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-78tx8" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.166358   58921 pod_ready.go:92] pod "kube-proxy-78tx8" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.166377   58921 pod_ready.go:81] duration metric: took 5.930051ms for pod "kube-proxy-78tx8" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.166387   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.170508   58921 pod_ready.go:92] pod "kube-scheduler-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.170528   58921 pod_ready.go:81] duration metric: took 4.133521ms for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.170538   58921 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:32.355967   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:34.358106   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.346579   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:37.346671   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:39.346974   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.438478   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:35.454105   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:35.454175   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:35.493287   59674 cri.go:89] found id: ""
	I0722 11:52:35.493319   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.493330   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:35.493337   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:35.493396   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:35.528035   59674 cri.go:89] found id: ""
	I0722 11:52:35.528060   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.528066   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:35.528072   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:35.528126   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:35.586153   59674 cri.go:89] found id: ""
	I0722 11:52:35.586199   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.586213   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:35.586220   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:35.586283   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:35.630371   59674 cri.go:89] found id: ""
	I0722 11:52:35.630405   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.630416   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:35.630425   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:35.630499   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:35.667593   59674 cri.go:89] found id: ""
	I0722 11:52:35.667621   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.667629   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:35.667635   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:35.667682   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:35.706933   59674 cri.go:89] found id: ""
	I0722 11:52:35.706964   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.706973   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:35.706981   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:35.707040   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:35.743174   59674 cri.go:89] found id: ""
	I0722 11:52:35.743205   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.743215   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:35.743223   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:35.743289   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:35.784450   59674 cri.go:89] found id: ""
	I0722 11:52:35.784478   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.784487   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:35.784497   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:35.784508   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:35.840326   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:35.840357   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:35.856432   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:35.856471   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:35.932273   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:35.932298   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:35.932313   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:36.010376   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:36.010420   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:38.552982   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:38.566817   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:38.566895   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:38.601313   59674 cri.go:89] found id: ""
	I0722 11:52:38.601356   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.601371   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:38.601381   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:38.601459   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:38.637303   59674 cri.go:89] found id: ""
	I0722 11:52:38.637331   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.637341   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:38.637352   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:38.637413   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:38.672840   59674 cri.go:89] found id: ""
	I0722 11:52:38.672871   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.672883   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:38.672894   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:38.672986   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:38.709375   59674 cri.go:89] found id: ""
	I0722 11:52:38.709402   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.709413   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:38.709420   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:38.709473   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:38.744060   59674 cri.go:89] found id: ""
	I0722 11:52:38.744084   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.744094   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:38.744100   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:38.744161   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:38.778322   59674 cri.go:89] found id: ""
	I0722 11:52:38.778350   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.778361   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:38.778368   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:38.778427   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:38.811803   59674 cri.go:89] found id: ""
	I0722 11:52:38.811830   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.811840   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:38.811847   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:38.811902   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:38.843935   59674 cri.go:89] found id: ""
	I0722 11:52:38.843959   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.843975   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:38.843985   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:38.843999   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:38.912613   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:38.912639   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:38.912654   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:39.001924   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:39.001964   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:39.041645   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:39.041684   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:39.093322   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:39.093354   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:37.177516   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:39.675985   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:36.856164   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:38.858983   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.847112   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:44.346271   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.606698   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:41.619758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:41.619815   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:41.657432   59674 cri.go:89] found id: ""
	I0722 11:52:41.657458   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.657469   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:41.657476   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:41.657536   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:41.695136   59674 cri.go:89] found id: ""
	I0722 11:52:41.695169   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.695177   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:41.695183   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:41.695243   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:41.735595   59674 cri.go:89] found id: ""
	I0722 11:52:41.735621   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.735641   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:41.735648   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:41.735710   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:41.770398   59674 cri.go:89] found id: ""
	I0722 11:52:41.770428   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.770438   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:41.770445   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:41.770554   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:41.808250   59674 cri.go:89] found id: ""
	I0722 11:52:41.808277   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.808285   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:41.808290   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:41.808349   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:41.843494   59674 cri.go:89] found id: ""
	I0722 11:52:41.843524   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.843536   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:41.843543   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:41.843611   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:41.882916   59674 cri.go:89] found id: ""
	I0722 11:52:41.882941   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.882949   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:41.882954   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:41.883011   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:41.916503   59674 cri.go:89] found id: ""
	I0722 11:52:41.916527   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.916538   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:41.916549   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:41.916564   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:41.966989   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:41.967023   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:42.021676   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:42.021716   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:42.054625   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:42.054655   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:42.122425   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:42.122449   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:42.122463   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:44.699097   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:44.713759   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:44.713815   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:44.752668   59674 cri.go:89] found id: ""
	I0722 11:52:44.752698   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.752709   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:44.752716   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:44.752778   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:44.793550   59674 cri.go:89] found id: ""
	I0722 11:52:44.793575   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.793587   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:44.793594   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:44.793665   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:44.833860   59674 cri.go:89] found id: ""
	I0722 11:52:44.833882   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.833890   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:44.833903   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:44.833952   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:44.873847   59674 cri.go:89] found id: ""
	I0722 11:52:44.873880   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.873898   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:44.873910   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:44.873957   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:44.907843   59674 cri.go:89] found id: ""
	I0722 11:52:44.907867   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.907877   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:44.907884   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:44.907937   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:44.942998   59674 cri.go:89] found id: ""
	I0722 11:52:44.943026   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.943034   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:44.943040   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:44.943093   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:44.981145   59674 cri.go:89] found id: ""
	I0722 11:52:44.981173   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.981183   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:44.981190   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:44.981252   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:45.018542   59674 cri.go:89] found id: ""
	I0722 11:52:45.018568   59674 logs.go:276] 0 containers: []
	W0722 11:52:45.018576   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:45.018585   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:45.018599   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:45.069480   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:45.069510   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:45.083323   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:45.083347   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:45.149976   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:45.149996   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:45.150008   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:45.230617   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:45.230649   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:41.677474   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:43.678565   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.357194   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:43.856753   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:46.346339   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.846643   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:47.770384   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:47.793582   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:47.793654   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:47.837187   59674 cri.go:89] found id: ""
	I0722 11:52:47.837215   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.837224   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:47.837232   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:47.837290   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:47.874295   59674 cri.go:89] found id: ""
	I0722 11:52:47.874325   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.874336   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:47.874345   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:47.874414   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:47.915782   59674 cri.go:89] found id: ""
	I0722 11:52:47.915812   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.915823   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:47.915830   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:47.915886   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:47.956624   59674 cri.go:89] found id: ""
	I0722 11:52:47.956653   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.956663   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:47.956670   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:47.956731   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:47.996237   59674 cri.go:89] found id: ""
	I0722 11:52:47.996264   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.996272   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:47.996277   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:47.996335   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:48.032022   59674 cri.go:89] found id: ""
	I0722 11:52:48.032046   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.032058   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:48.032066   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:48.032117   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:48.066218   59674 cri.go:89] found id: ""
	I0722 11:52:48.066248   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.066259   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:48.066265   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:48.066316   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:48.099781   59674 cri.go:89] found id: ""
	I0722 11:52:48.099803   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.099810   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:48.099818   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:48.099827   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:48.174488   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:48.174528   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:48.215029   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:48.215068   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:48.268819   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:48.268850   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:48.283307   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:48.283335   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:48.356491   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:45.678697   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.179684   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:45.857970   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.357330   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.357469   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.846976   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.847954   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.857172   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:50.871178   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:50.871244   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:50.907166   59674 cri.go:89] found id: ""
	I0722 11:52:50.907190   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.907197   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:50.907203   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:50.907256   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:50.942929   59674 cri.go:89] found id: ""
	I0722 11:52:50.942958   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.942969   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:50.942976   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:50.943041   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:50.982323   59674 cri.go:89] found id: ""
	I0722 11:52:50.982355   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.982367   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:50.982373   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:50.982436   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:51.016557   59674 cri.go:89] found id: ""
	I0722 11:52:51.016586   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.016597   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:51.016604   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:51.016662   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:51.051811   59674 cri.go:89] found id: ""
	I0722 11:52:51.051844   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.051855   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:51.051863   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:51.051923   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:51.088147   59674 cri.go:89] found id: ""
	I0722 11:52:51.088177   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.088189   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:51.088197   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:51.088257   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:51.126795   59674 cri.go:89] found id: ""
	I0722 11:52:51.126827   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.126838   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:51.126845   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:51.126909   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:51.165508   59674 cri.go:89] found id: ""
	I0722 11:52:51.165539   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.165550   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:51.165562   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:51.165575   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:51.245014   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:51.245040   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:51.245055   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:51.335845   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:51.335893   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:51.375806   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:51.375837   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:51.430241   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:51.430270   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:53.944572   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:53.957805   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:53.957899   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:53.997116   59674 cri.go:89] found id: ""
	I0722 11:52:53.997144   59674 logs.go:276] 0 containers: []
	W0722 11:52:53.997154   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:53.997161   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:53.997222   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:54.033518   59674 cri.go:89] found id: ""
	I0722 11:52:54.033544   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.033553   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:54.033560   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:54.033626   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:54.071083   59674 cri.go:89] found id: ""
	I0722 11:52:54.071108   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.071119   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:54.071127   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:54.071194   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:54.107834   59674 cri.go:89] found id: ""
	I0722 11:52:54.107860   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.107868   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:54.107873   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:54.107929   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:54.141825   59674 cri.go:89] found id: ""
	I0722 11:52:54.141850   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.141858   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:54.141865   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:54.141925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:54.174297   59674 cri.go:89] found id: ""
	I0722 11:52:54.174323   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.174333   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:54.174341   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:54.174403   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:54.206781   59674 cri.go:89] found id: ""
	I0722 11:52:54.206803   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.206811   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:54.206816   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:54.206861   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:54.239180   59674 cri.go:89] found id: ""
	I0722 11:52:54.239204   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.239212   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:54.239223   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:54.239237   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:54.307317   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:54.307345   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:54.307360   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:54.392334   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:54.392368   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:54.435129   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:54.435168   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:54.495428   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:54.495456   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:50.676790   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.678046   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:55.177430   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.357839   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:54.856859   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:55.346866   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.845527   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.009559   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:57.024145   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:57.024215   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:57.063027   59674 cri.go:89] found id: ""
	I0722 11:52:57.063053   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.063060   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:57.063066   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:57.063133   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:57.095940   59674 cri.go:89] found id: ""
	I0722 11:52:57.095961   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.095968   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:57.095973   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:57.096018   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:57.129931   59674 cri.go:89] found id: ""
	I0722 11:52:57.129952   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.129960   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:57.129965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:57.130009   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:57.164643   59674 cri.go:89] found id: ""
	I0722 11:52:57.164672   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.164683   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:57.164691   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:57.164744   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:57.201411   59674 cri.go:89] found id: ""
	I0722 11:52:57.201440   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.201451   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:57.201458   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:57.201523   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:57.235816   59674 cri.go:89] found id: ""
	I0722 11:52:57.235838   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.235848   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:57.235854   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:57.235913   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:57.273896   59674 cri.go:89] found id: ""
	I0722 11:52:57.273925   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.273936   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:57.273943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:57.273997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:57.312577   59674 cri.go:89] found id: ""
	I0722 11:52:57.312602   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.312610   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:57.312618   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:57.312636   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:57.366529   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:57.366558   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:57.380829   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:57.380854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:57.450855   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:57.450875   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:57.450889   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:57.531450   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:57.531480   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:00.071642   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:00.085199   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:00.085264   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:00.123418   59674 cri.go:89] found id: ""
	I0722 11:53:00.123439   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.123446   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:00.123451   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:00.123510   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:00.157005   59674 cri.go:89] found id: ""
	I0722 11:53:00.157032   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.157042   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:00.157049   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:00.157108   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:00.196244   59674 cri.go:89] found id: ""
	I0722 11:53:00.196272   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.196281   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:00.196286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:00.196335   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:00.233010   59674 cri.go:89] found id: ""
	I0722 11:53:00.233039   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.233049   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:00.233056   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:00.233112   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:00.268154   59674 cri.go:89] found id: ""
	I0722 11:53:00.268179   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.268187   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:00.268192   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:00.268250   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:00.304159   59674 cri.go:89] found id: ""
	I0722 11:53:00.304184   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.304194   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:00.304201   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:00.304268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:00.336853   59674 cri.go:89] found id: ""
	I0722 11:53:00.336883   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.336893   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:00.336899   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:00.336960   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:00.370921   59674 cri.go:89] found id: ""
	I0722 11:53:00.370943   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.370953   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:00.370963   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:00.370979   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:57.177913   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:59.677194   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.356163   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:59.357042   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:00.347125   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:02.846531   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:00.422367   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:00.422399   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:00.437915   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:00.437947   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:00.512663   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:00.512689   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:00.512700   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:00.595147   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:00.595189   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:03.135150   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:03.148079   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:03.148151   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:03.182278   59674 cri.go:89] found id: ""
	I0722 11:53:03.182308   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.182318   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:03.182327   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:03.182409   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:03.220570   59674 cri.go:89] found id: ""
	I0722 11:53:03.220599   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.220607   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:03.220613   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:03.220671   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:03.255917   59674 cri.go:89] found id: ""
	I0722 11:53:03.255940   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.255950   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:03.255957   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:03.256020   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:03.290857   59674 cri.go:89] found id: ""
	I0722 11:53:03.290885   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.290895   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:03.290902   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:03.290959   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:03.326917   59674 cri.go:89] found id: ""
	I0722 11:53:03.326940   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.326951   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:03.326958   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:03.327016   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:03.363787   59674 cri.go:89] found id: ""
	I0722 11:53:03.363809   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.363818   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:03.363825   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:03.363881   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:03.397453   59674 cri.go:89] found id: ""
	I0722 11:53:03.397479   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.397489   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:03.397496   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:03.397554   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:03.429984   59674 cri.go:89] found id: ""
	I0722 11:53:03.430012   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.430020   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:03.430037   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:03.430054   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:03.509273   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:03.509305   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:03.555522   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:03.555552   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:03.607361   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:03.607389   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:03.622731   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:03.622752   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:03.699844   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:02.176754   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:04.180602   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:01.856868   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:04.356343   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:05.346023   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:07.846190   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:06.200053   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:06.213571   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:06.213628   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:06.249320   59674 cri.go:89] found id: ""
	I0722 11:53:06.249348   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.249359   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:06.249366   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:06.249426   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:06.283378   59674 cri.go:89] found id: ""
	I0722 11:53:06.283405   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.283415   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:06.283422   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:06.283482   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:06.319519   59674 cri.go:89] found id: ""
	I0722 11:53:06.319540   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.319548   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:06.319553   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:06.319606   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:06.352263   59674 cri.go:89] found id: ""
	I0722 11:53:06.352289   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.352298   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:06.352310   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:06.352370   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:06.388262   59674 cri.go:89] found id: ""
	I0722 11:53:06.388285   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.388292   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:06.388297   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:06.388348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:06.427487   59674 cri.go:89] found id: ""
	I0722 11:53:06.427519   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.427529   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:06.427537   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:06.427592   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:06.462567   59674 cri.go:89] found id: ""
	I0722 11:53:06.462597   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.462610   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:06.462618   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:06.462674   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:06.496880   59674 cri.go:89] found id: ""
	I0722 11:53:06.496904   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.496911   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:06.496920   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:06.496929   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:06.549225   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:06.549262   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:06.564780   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:06.564808   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:06.632152   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:06.632177   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:06.632196   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:06.706909   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:06.706948   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:09.246773   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:09.260605   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:09.260673   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:09.294685   59674 cri.go:89] found id: ""
	I0722 11:53:09.294707   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.294718   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:09.294726   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:09.294787   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:09.331109   59674 cri.go:89] found id: ""
	I0722 11:53:09.331140   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.331148   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:09.331153   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:09.331208   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:09.366873   59674 cri.go:89] found id: ""
	I0722 11:53:09.366901   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.366911   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:09.366928   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:09.366980   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:09.399614   59674 cri.go:89] found id: ""
	I0722 11:53:09.399642   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.399649   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:09.399655   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:09.399708   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:09.434326   59674 cri.go:89] found id: ""
	I0722 11:53:09.434359   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.434369   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:09.434375   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:09.434437   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:09.468911   59674 cri.go:89] found id: ""
	I0722 11:53:09.468942   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.468953   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:09.468961   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:09.469021   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:09.510003   59674 cri.go:89] found id: ""
	I0722 11:53:09.510031   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.510042   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:09.510048   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:09.510101   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:09.545074   59674 cri.go:89] found id: ""
	I0722 11:53:09.545103   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.545113   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:09.545123   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:09.545148   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:09.559370   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:09.559399   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:09.632039   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:09.632064   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:09.632083   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:09.711851   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:09.711881   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:09.751872   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:09.751898   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:06.678310   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:09.176261   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:06.358444   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:08.858131   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:09.846552   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:12.347071   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:12.302294   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:12.315638   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:12.315708   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:12.349556   59674 cri.go:89] found id: ""
	I0722 11:53:12.349579   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.349588   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:12.349595   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:12.349651   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:12.387443   59674 cri.go:89] found id: ""
	I0722 11:53:12.387470   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.387483   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:12.387488   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:12.387541   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:12.422676   59674 cri.go:89] found id: ""
	I0722 11:53:12.422704   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.422714   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:12.422720   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:12.422781   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:12.457069   59674 cri.go:89] found id: ""
	I0722 11:53:12.457099   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.457111   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:12.457117   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:12.457175   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:12.492498   59674 cri.go:89] found id: ""
	I0722 11:53:12.492526   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.492536   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:12.492543   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:12.492603   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:12.529015   59674 cri.go:89] found id: ""
	I0722 11:53:12.529046   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.529056   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:12.529063   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:12.529122   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:12.564325   59674 cri.go:89] found id: ""
	I0722 11:53:12.564353   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.564363   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:12.564371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:12.564441   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:12.603232   59674 cri.go:89] found id: ""
	I0722 11:53:12.603257   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.603269   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:12.603278   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:12.603289   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:12.689901   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:12.689933   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:12.729780   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:12.729808   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:12.778899   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:12.778928   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:12.792619   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:12.792649   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:12.860293   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:15.361321   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:15.375062   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:15.375125   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:15.409072   59674 cri.go:89] found id: ""
	I0722 11:53:15.409096   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.409104   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:15.409109   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:15.409163   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:11.176321   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:13.176728   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.176983   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:11.356441   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:13.356690   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:14.846984   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:17.346182   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:19.346559   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.447004   59674 cri.go:89] found id: ""
	I0722 11:53:15.447026   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.447033   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:15.447039   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:15.447096   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:15.480783   59674 cri.go:89] found id: ""
	I0722 11:53:15.480811   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.480822   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:15.480829   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:15.480906   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:15.520672   59674 cri.go:89] found id: ""
	I0722 11:53:15.520701   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.520713   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:15.520721   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:15.520777   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:15.557886   59674 cri.go:89] found id: ""
	I0722 11:53:15.557916   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.557926   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:15.557933   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:15.557994   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:15.593517   59674 cri.go:89] found id: ""
	I0722 11:53:15.593545   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.593555   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:15.593561   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:15.593619   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:15.628205   59674 cri.go:89] found id: ""
	I0722 11:53:15.628235   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.628246   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:15.628253   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:15.628314   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:15.664239   59674 cri.go:89] found id: ""
	I0722 11:53:15.664265   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.664276   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:15.664287   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:15.664300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:15.714246   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:15.714281   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:15.728467   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:15.728490   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:15.813299   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:15.813323   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:15.813339   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:15.899949   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:15.899984   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:18.443394   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:18.457499   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:18.457555   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:18.489712   59674 cri.go:89] found id: ""
	I0722 11:53:18.489735   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.489745   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:18.489752   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:18.489812   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:18.524947   59674 cri.go:89] found id: ""
	I0722 11:53:18.524973   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.524982   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:18.524989   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:18.525045   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:18.560325   59674 cri.go:89] found id: ""
	I0722 11:53:18.560350   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.560361   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:18.560367   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:18.560439   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:18.594221   59674 cri.go:89] found id: ""
	I0722 11:53:18.594247   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.594255   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:18.594265   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:18.594322   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:18.630809   59674 cri.go:89] found id: ""
	I0722 11:53:18.630839   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.630850   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:18.630857   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:18.630917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:18.666051   59674 cri.go:89] found id: ""
	I0722 11:53:18.666078   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.666089   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:18.666100   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:18.666159   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:18.703337   59674 cri.go:89] found id: ""
	I0722 11:53:18.703362   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.703370   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:18.703375   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:18.703435   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:18.738960   59674 cri.go:89] found id: ""
	I0722 11:53:18.738990   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.738999   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:18.739008   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:18.739022   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:18.788130   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:18.788163   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:18.802219   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:18.802249   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:18.869568   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:18.869586   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:18.869597   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:18.947223   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:18.947256   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:17.177247   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:19.677072   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.857320   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:18.356290   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:20.356364   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:21.346698   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:23.846749   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:21.487936   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:21.501337   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:21.501421   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:21.537649   59674 cri.go:89] found id: ""
	I0722 11:53:21.537674   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.537681   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:21.537686   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:21.537746   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:21.583693   59674 cri.go:89] found id: ""
	I0722 11:53:21.583728   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.583738   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:21.583745   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:21.583803   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:21.621690   59674 cri.go:89] found id: ""
	I0722 11:53:21.621714   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.621722   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:21.621728   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:21.621773   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:21.657855   59674 cri.go:89] found id: ""
	I0722 11:53:21.657878   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.657885   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:21.657891   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:21.657953   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:21.695025   59674 cri.go:89] found id: ""
	I0722 11:53:21.695051   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.695059   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:21.695065   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:21.695113   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:21.730108   59674 cri.go:89] found id: ""
	I0722 11:53:21.730138   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.730146   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:21.730151   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:21.730208   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:21.763943   59674 cri.go:89] found id: ""
	I0722 11:53:21.763972   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.763980   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:21.763985   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:21.764030   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:21.801227   59674 cri.go:89] found id: ""
	I0722 11:53:21.801251   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.801259   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:21.801270   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:21.801283   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:21.851428   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:21.851457   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:21.867798   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:21.867827   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:21.945577   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:21.945599   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:21.945612   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:22.028796   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:22.028839   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:24.577167   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:24.589859   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:24.589917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:24.623952   59674 cri.go:89] found id: ""
	I0722 11:53:24.623985   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.623997   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:24.624003   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:24.624065   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:24.658881   59674 cri.go:89] found id: ""
	I0722 11:53:24.658910   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.658919   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:24.658925   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:24.658973   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:24.694551   59674 cri.go:89] found id: ""
	I0722 11:53:24.694574   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.694584   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:24.694590   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:24.694634   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:24.728952   59674 cri.go:89] found id: ""
	I0722 11:53:24.728980   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.728990   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:24.728999   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:24.729061   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:24.764562   59674 cri.go:89] found id: ""
	I0722 11:53:24.764584   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.764592   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:24.764597   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:24.764643   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:24.804184   59674 cri.go:89] found id: ""
	I0722 11:53:24.804209   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.804219   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:24.804226   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:24.804277   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:24.841870   59674 cri.go:89] found id: ""
	I0722 11:53:24.841896   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.841906   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:24.841913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:24.841967   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:24.876174   59674 cri.go:89] found id: ""
	I0722 11:53:24.876201   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.876210   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:24.876220   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:24.876234   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:24.928405   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:24.928434   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:24.942443   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:24.942472   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:25.010281   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:25.010304   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:25.010318   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:25.091493   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:25.091525   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:22.176013   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:24.177414   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:22.356642   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:24.856817   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:26.346061   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:28.346192   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:27.630939   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:27.644250   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:27.644324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:27.686356   59674 cri.go:89] found id: ""
	I0722 11:53:27.686381   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.686391   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:27.686404   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:27.686483   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:27.719105   59674 cri.go:89] found id: ""
	I0722 11:53:27.719133   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.719143   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:27.719149   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:27.719210   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:27.755476   59674 cri.go:89] found id: ""
	I0722 11:53:27.755505   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.755514   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:27.755520   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:27.755570   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:27.789936   59674 cri.go:89] found id: ""
	I0722 11:53:27.789963   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.789971   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:27.789977   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:27.790023   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:27.824246   59674 cri.go:89] found id: ""
	I0722 11:53:27.824273   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.824280   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:27.824286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:27.824332   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:27.860081   59674 cri.go:89] found id: ""
	I0722 11:53:27.860107   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.860114   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:27.860120   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:27.860172   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:27.895705   59674 cri.go:89] found id: ""
	I0722 11:53:27.895732   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.895741   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:27.895748   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:27.895801   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:27.930750   59674 cri.go:89] found id: ""
	I0722 11:53:27.930774   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.930781   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:27.930790   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:27.930802   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:28.025545   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:28.025567   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:28.025578   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:28.111194   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:28.111227   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:28.154270   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:28.154300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:28.205822   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:28.205854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:26.677054   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:29.178063   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:26.856858   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:29.356840   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:30.346338   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:32.346478   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:30.720468   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:30.733753   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:30.733806   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:30.771774   59674 cri.go:89] found id: ""
	I0722 11:53:30.771803   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.771810   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:30.771816   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:30.771876   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:30.810499   59674 cri.go:89] found id: ""
	I0722 11:53:30.810526   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.810537   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:30.810543   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:30.810608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:30.846824   59674 cri.go:89] found id: ""
	I0722 11:53:30.846854   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.846865   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:30.846872   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:30.846929   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:30.882372   59674 cri.go:89] found id: ""
	I0722 11:53:30.882399   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.882408   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:30.882415   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:30.882462   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:30.916152   59674 cri.go:89] found id: ""
	I0722 11:53:30.916186   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.916201   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:30.916209   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:30.916281   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:30.950442   59674 cri.go:89] found id: ""
	I0722 11:53:30.950466   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.950475   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:30.950482   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:30.950537   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:30.988328   59674 cri.go:89] found id: ""
	I0722 11:53:30.988355   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.988367   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:30.988374   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:30.988452   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:31.024500   59674 cri.go:89] found id: ""
	I0722 11:53:31.024531   59674 logs.go:276] 0 containers: []
	W0722 11:53:31.024542   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:31.024552   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:31.024565   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:31.078276   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:31.078306   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:31.093640   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:31.093665   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:31.161107   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:31.161131   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:31.161145   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:31.248520   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:31.248552   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:33.792694   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:33.806731   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:33.806802   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:33.840813   59674 cri.go:89] found id: ""
	I0722 11:53:33.840842   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.840852   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:33.840859   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:33.840930   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:33.878353   59674 cri.go:89] found id: ""
	I0722 11:53:33.878380   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.878388   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:33.878394   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:33.878453   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:33.913894   59674 cri.go:89] found id: ""
	I0722 11:53:33.913927   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.913937   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:33.913944   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:33.914007   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:33.950659   59674 cri.go:89] found id: ""
	I0722 11:53:33.950689   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.950700   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:33.950706   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:33.950762   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:33.987904   59674 cri.go:89] found id: ""
	I0722 11:53:33.987932   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.987940   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:33.987945   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:33.987995   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:34.022877   59674 cri.go:89] found id: ""
	I0722 11:53:34.022900   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.022910   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:34.022918   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:34.022970   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:34.056678   59674 cri.go:89] found id: ""
	I0722 11:53:34.056707   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.056717   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:34.056722   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:34.056769   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:34.089573   59674 cri.go:89] found id: ""
	I0722 11:53:34.089602   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.089610   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:34.089618   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:34.089630   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:34.161023   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:34.161043   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:34.161058   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:34.243215   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:34.243249   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:34.290788   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:34.290812   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:34.339653   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:34.339692   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:31.677233   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:33.678067   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:31.856615   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:33.857665   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:34.846962   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.847525   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:39.347402   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
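	(Editorial sketch, not part of the captured log.) The interleaved pod_ready lines come from other profiles running in parallel, each still polling its metrics-server pod for the Ready condition. A hedged one-liner to inspect the same condition by hand; the context name is a placeholder, the pod name is taken from the log lines above:
	  # Prints "False" while the pod's Ready condition is not met (sketch only).
	  kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-wm2w8 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'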
	I0722 11:53:36.857217   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:36.871083   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:36.871150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:36.913807   59674 cri.go:89] found id: ""
	I0722 11:53:36.913833   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.913841   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:36.913847   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:36.913923   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:36.953290   59674 cri.go:89] found id: ""
	I0722 11:53:36.953316   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.953327   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:36.953334   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:36.953395   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:36.990900   59674 cri.go:89] found id: ""
	I0722 11:53:36.990930   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.990938   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:36.990943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:36.990997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:37.034346   59674 cri.go:89] found id: ""
	I0722 11:53:37.034371   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.034381   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:37.034387   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:37.034444   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:37.071413   59674 cri.go:89] found id: ""
	I0722 11:53:37.071440   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.071451   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:37.071458   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:37.071509   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:37.107034   59674 cri.go:89] found id: ""
	I0722 11:53:37.107065   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.107076   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:37.107084   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:37.107143   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:37.145505   59674 cri.go:89] found id: ""
	I0722 11:53:37.145528   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.145536   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:37.145545   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:37.145607   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:37.182287   59674 cri.go:89] found id: ""
	I0722 11:53:37.182313   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.182321   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:37.182332   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:37.182343   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:37.195663   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:37.195688   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:37.267451   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:37.267476   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:37.267492   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:37.348532   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:37.348561   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:37.396108   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:37.396134   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:39.946775   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:39.959980   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:39.960039   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:39.994172   59674 cri.go:89] found id: ""
	I0722 11:53:39.994198   59674 logs.go:276] 0 containers: []
	W0722 11:53:39.994208   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:39.994213   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:39.994269   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:40.032782   59674 cri.go:89] found id: ""
	I0722 11:53:40.032813   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.032823   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:40.032830   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:40.032890   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:40.067503   59674 cri.go:89] found id: ""
	I0722 11:53:40.067525   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.067532   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:40.067537   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:40.067593   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:40.102234   59674 cri.go:89] found id: ""
	I0722 11:53:40.102262   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.102273   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:40.102280   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:40.102342   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:40.135152   59674 cri.go:89] found id: ""
	I0722 11:53:40.135180   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.135190   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:40.135197   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:40.135262   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:40.168930   59674 cri.go:89] found id: ""
	I0722 11:53:40.168958   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.168978   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:40.168993   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:40.169056   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:40.209032   59674 cri.go:89] found id: ""
	I0722 11:53:40.209058   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.209065   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:40.209071   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:40.209131   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:40.243952   59674 cri.go:89] found id: ""
	I0722 11:53:40.243976   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.243984   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:40.243993   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:40.244006   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:40.297909   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:40.297944   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:40.313359   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:40.313385   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:40.391089   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:40.391118   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:40.391136   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:36.178616   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:38.677556   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.356964   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:38.857992   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:41.847033   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:44.346087   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:40.469622   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:40.469652   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:43.010264   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:43.023750   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:43.023823   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:43.058899   59674 cri.go:89] found id: ""
	I0722 11:53:43.058922   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.058930   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:43.058937   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:43.058999   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:43.093308   59674 cri.go:89] found id: ""
	I0722 11:53:43.093328   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.093336   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:43.093341   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:43.093385   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:43.126617   59674 cri.go:89] found id: ""
	I0722 11:53:43.126648   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.126671   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:43.126686   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:43.126737   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:43.159455   59674 cri.go:89] found id: ""
	I0722 11:53:43.159482   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.159492   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:43.159500   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:43.159561   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:43.195726   59674 cri.go:89] found id: ""
	I0722 11:53:43.195749   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.195758   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:43.195766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:43.195830   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:43.231996   59674 cri.go:89] found id: ""
	I0722 11:53:43.232025   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.232038   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:43.232046   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:43.232118   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:43.266911   59674 cri.go:89] found id: ""
	I0722 11:53:43.266936   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.266943   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:43.266948   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:43.267005   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:43.303202   59674 cri.go:89] found id: ""
	I0722 11:53:43.303227   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.303236   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:43.303243   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:43.303255   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:43.377328   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:43.377362   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:43.418732   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:43.418759   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:43.471507   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:43.471536   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:43.485141   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:43.485175   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:43.557071   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:41.178042   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:43.178179   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:41.357090   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:43.856788   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:46.346435   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.347938   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:46.057361   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:46.071701   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:46.071784   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:46.107818   59674 cri.go:89] found id: ""
	I0722 11:53:46.107845   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.107853   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:46.107859   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:46.107952   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:46.141871   59674 cri.go:89] found id: ""
	I0722 11:53:46.141898   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.141906   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:46.141911   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:46.141972   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:46.180980   59674 cri.go:89] found id: ""
	I0722 11:53:46.181004   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.181014   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:46.181021   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:46.181083   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:46.219765   59674 cri.go:89] found id: ""
	I0722 11:53:46.219797   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.219806   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:46.219812   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:46.219866   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:46.259517   59674 cri.go:89] found id: ""
	I0722 11:53:46.259544   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.259554   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:46.259562   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:46.259621   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:46.292190   59674 cri.go:89] found id: ""
	I0722 11:53:46.292220   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.292230   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:46.292239   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:46.292305   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:46.325494   59674 cri.go:89] found id: ""
	I0722 11:53:46.325519   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.325529   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:46.325536   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:46.325608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:46.364367   59674 cri.go:89] found id: ""
	I0722 11:53:46.364403   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.364412   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:46.364422   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:46.364435   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:46.417749   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:46.417792   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:46.433793   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:46.433817   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:46.502075   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:46.502098   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:46.502111   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:46.584038   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:46.584075   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
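	(Editorial note, not part of the captured log.) The "container status" probe shown repeatedly above uses a shell fallback so it works whether the runtime is CRI-O or Docker; the pattern, exactly as it appears in these log lines, is:
	  # Prefer crictl if installed; otherwise the trailing || falls back to docker.
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a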
	I0722 11:53:49.127895   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:49.141601   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:49.141672   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:49.175251   59674 cri.go:89] found id: ""
	I0722 11:53:49.175276   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.175284   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:49.175290   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:49.175346   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:49.214504   59674 cri.go:89] found id: ""
	I0722 11:53:49.214552   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.214563   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:49.214570   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:49.214631   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:49.251844   59674 cri.go:89] found id: ""
	I0722 11:53:49.251872   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.251882   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:49.251889   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:49.251955   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:49.285540   59674 cri.go:89] found id: ""
	I0722 11:53:49.285569   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.285577   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:49.285582   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:49.285630   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:49.323300   59674 cri.go:89] found id: ""
	I0722 11:53:49.323321   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.323331   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:49.323336   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:49.323393   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:49.361571   59674 cri.go:89] found id: ""
	I0722 11:53:49.361599   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.361609   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:49.361615   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:49.361675   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:49.398709   59674 cri.go:89] found id: ""
	I0722 11:53:49.398736   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.398747   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:49.398753   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:49.398813   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:49.430527   59674 cri.go:89] found id: ""
	I0722 11:53:49.430552   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.430564   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:49.430576   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:49.430591   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:49.481517   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:49.481557   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:49.496069   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:49.496094   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:49.563515   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:49.563536   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:49.563549   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:49.645313   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:49.645354   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:45.678130   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.179309   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:45.857932   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.356438   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:50.356527   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:50.348077   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.846675   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.188460   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:52.201620   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:52.201689   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:52.238836   59674 cri.go:89] found id: ""
	I0722 11:53:52.238858   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.238865   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:52.238870   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:52.238932   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:52.275739   59674 cri.go:89] found id: ""
	I0722 11:53:52.275760   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.275768   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:52.275781   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:52.275839   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:52.310362   59674 cri.go:89] found id: ""
	I0722 11:53:52.310390   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.310397   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:52.310402   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:52.310461   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:52.348733   59674 cri.go:89] found id: ""
	I0722 11:53:52.348753   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.348760   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:52.348766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:52.348822   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:52.383052   59674 cri.go:89] found id: ""
	I0722 11:53:52.383079   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.383087   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:52.383094   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:52.383155   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:52.420557   59674 cri.go:89] found id: ""
	I0722 11:53:52.420579   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.420587   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:52.420592   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:52.420655   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:52.454027   59674 cri.go:89] found id: ""
	I0722 11:53:52.454057   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.454066   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:52.454073   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:52.454134   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:52.495433   59674 cri.go:89] found id: ""
	I0722 11:53:52.495458   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.495469   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:52.495480   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:52.495493   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:52.541383   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:52.541417   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:52.595687   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:52.595733   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:52.609965   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:52.609987   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:52.687531   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:52.687552   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:52.687565   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
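	(Editorial sketch, not part of the captured log.) Each cycle opens with a pgrep for the apiserver process, and every "describe nodes" attempt fails with connection refused on localhost:8443, i.e. no apiserver is listening. A quick manual check along the same lines; the pgrep is taken from the log, while the curl flags and the /healthz endpoint are an assumption about a default apiserver setup:
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'                     # no PID => the apiserver process is not running
	  curl -sk https://localhost:8443/healthz || echo "apiserver not listening on :8443"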
	I0722 11:53:55.270419   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:55.284577   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:55.284632   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:55.321978   59674 cri.go:89] found id: ""
	I0722 11:53:55.322014   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.322023   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:55.322030   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:55.322092   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:55.358710   59674 cri.go:89] found id: ""
	I0722 11:53:55.358736   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.358746   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:55.358753   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:55.358807   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:55.394784   59674 cri.go:89] found id: ""
	I0722 11:53:55.394810   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.394820   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:55.394827   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:55.394884   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:50.677072   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.678016   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.177624   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.356565   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:54.357061   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.347422   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:57.846266   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.429035   59674 cri.go:89] found id: ""
	I0722 11:53:55.429059   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.429066   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:55.429072   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:55.429122   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:55.464733   59674 cri.go:89] found id: ""
	I0722 11:53:55.464754   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.464761   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:55.464767   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:55.464824   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:55.500113   59674 cri.go:89] found id: ""
	I0722 11:53:55.500140   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.500152   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:55.500164   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:55.500227   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:55.536013   59674 cri.go:89] found id: ""
	I0722 11:53:55.536040   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.536050   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:55.536056   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:55.536129   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:55.575385   59674 cri.go:89] found id: ""
	I0722 11:53:55.575412   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.575420   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:55.575428   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:55.575439   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:55.628427   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:55.628459   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:55.642648   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:55.642677   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:55.715236   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:55.715258   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:55.715270   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:55.794200   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:55.794233   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:58.336329   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:58.351000   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:58.351056   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:58.389817   59674 cri.go:89] found id: ""
	I0722 11:53:58.389841   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.389849   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:58.389854   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:58.389902   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:58.430814   59674 cri.go:89] found id: ""
	I0722 11:53:58.430843   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.430852   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:58.430857   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:58.430917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:58.477898   59674 cri.go:89] found id: ""
	I0722 11:53:58.477928   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.477938   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:58.477947   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:58.477992   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:58.513426   59674 cri.go:89] found id: ""
	I0722 11:53:58.513450   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.513461   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:58.513468   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:58.513530   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:58.546455   59674 cri.go:89] found id: ""
	I0722 11:53:58.546484   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.546494   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:58.546501   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:58.546560   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:58.582248   59674 cri.go:89] found id: ""
	I0722 11:53:58.582273   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.582280   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:58.582286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:58.582339   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:58.617221   59674 cri.go:89] found id: ""
	I0722 11:53:58.617246   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.617253   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:58.617259   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:58.617321   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:58.648896   59674 cri.go:89] found id: ""
	I0722 11:53:58.648930   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.648941   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:58.648949   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:58.648962   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:58.701735   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:58.701771   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:58.715747   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:58.715766   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:58.782104   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:58.782125   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:58.782136   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:58.868634   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:58.868664   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:57.677281   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:00.179188   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:56.856873   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:58.864754   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:59.846378   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:02.346626   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:04.346748   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:01.410874   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:01.423839   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:01.423914   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:01.460156   59674 cri.go:89] found id: ""
	I0722 11:54:01.460181   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.460191   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:01.460198   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:01.460252   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:01.497130   59674 cri.go:89] found id: ""
	I0722 11:54:01.497156   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.497165   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:01.497172   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:01.497228   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:01.532805   59674 cri.go:89] found id: ""
	I0722 11:54:01.532832   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.532842   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:01.532849   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:01.532907   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:01.569955   59674 cri.go:89] found id: ""
	I0722 11:54:01.569989   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.569999   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:01.570014   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:01.570067   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:01.602937   59674 cri.go:89] found id: ""
	I0722 11:54:01.602967   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.602977   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:01.602983   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:01.603033   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:01.634250   59674 cri.go:89] found id: ""
	I0722 11:54:01.634276   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.634283   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:01.634289   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:01.634337   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:01.670256   59674 cri.go:89] found id: ""
	I0722 11:54:01.670286   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.670295   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:01.670300   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:01.670348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:01.708555   59674 cri.go:89] found id: ""
	I0722 11:54:01.708577   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.708584   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:01.708592   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:01.708603   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:01.723065   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:01.723090   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:01.790642   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:01.790662   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:01.790673   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:01.887827   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:01.887861   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:01.927121   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:01.927143   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:04.479248   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:04.493038   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:04.493101   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:04.527516   59674 cri.go:89] found id: ""
	I0722 11:54:04.527539   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.527547   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:04.527557   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:04.527603   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:04.565830   59674 cri.go:89] found id: ""
	I0722 11:54:04.565863   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.565874   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:04.565882   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:04.565970   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:04.606198   59674 cri.go:89] found id: ""
	I0722 11:54:04.606223   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.606235   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:04.606242   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:04.606301   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:04.650372   59674 cri.go:89] found id: ""
	I0722 11:54:04.650394   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.650403   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:04.650411   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:04.650473   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:04.689556   59674 cri.go:89] found id: ""
	I0722 11:54:04.689580   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.689587   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:04.689592   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:04.689648   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:04.724954   59674 cri.go:89] found id: ""
	I0722 11:54:04.724986   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.724997   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:04.725004   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:04.725057   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:04.769000   59674 cri.go:89] found id: ""
	I0722 11:54:04.769024   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.769031   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:04.769037   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:04.769088   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:04.802022   59674 cri.go:89] found id: ""
	I0722 11:54:04.802042   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.802049   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:04.802057   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:04.802067   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:04.855969   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:04.856006   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:04.871210   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:04.871238   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:04.938050   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:04.938069   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:04.938082   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:05.014415   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:05.014449   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:02.677036   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:04.677779   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:01.356993   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:03.856173   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:06.847195   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:08.847333   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:07.556725   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:07.583525   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:07.583600   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:07.618546   59674 cri.go:89] found id: ""
	I0722 11:54:07.618574   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.618584   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:07.618591   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:07.618651   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:07.655218   59674 cri.go:89] found id: ""
	I0722 11:54:07.655247   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.655256   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:07.655261   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:07.655321   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:07.695453   59674 cri.go:89] found id: ""
	I0722 11:54:07.695482   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.695491   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:07.695499   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:07.695558   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:07.729887   59674 cri.go:89] found id: ""
	I0722 11:54:07.729922   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.729932   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:07.729939   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:07.729998   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:07.768429   59674 cri.go:89] found id: ""
	I0722 11:54:07.768451   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.768458   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:07.768464   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:07.768520   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:07.804372   59674 cri.go:89] found id: ""
	I0722 11:54:07.804408   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.804419   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:07.804426   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:07.804479   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:07.840924   59674 cri.go:89] found id: ""
	I0722 11:54:07.840948   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.840958   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:07.840965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:07.841027   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:07.877796   59674 cri.go:89] found id: ""
	I0722 11:54:07.877823   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.877830   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:07.877838   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:07.877849   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:07.930437   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:07.930467   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:07.943581   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:07.943611   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:08.013944   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:08.013963   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:08.013973   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:08.090969   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:08.091007   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:07.178423   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:09.178648   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:05.856697   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:07.857718   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:10.356584   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:11.345407   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:13.346477   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:10.631507   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:10.644886   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:10.644958   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:10.679242   59674 cri.go:89] found id: ""
	I0722 11:54:10.679268   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.679278   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:10.679284   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:10.679340   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:10.714324   59674 cri.go:89] found id: ""
	I0722 11:54:10.714351   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.714358   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:10.714364   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:10.714425   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:10.751053   59674 cri.go:89] found id: ""
	I0722 11:54:10.751075   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.751090   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:10.751097   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:10.751164   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:10.788736   59674 cri.go:89] found id: ""
	I0722 11:54:10.788765   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.788775   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:10.788782   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:10.788899   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:10.823780   59674 cri.go:89] found id: ""
	I0722 11:54:10.823804   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.823814   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:10.823821   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:10.823884   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:10.859708   59674 cri.go:89] found id: ""
	I0722 11:54:10.859731   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.859741   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:10.859748   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:10.859804   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:10.893364   59674 cri.go:89] found id: ""
	I0722 11:54:10.893390   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.893400   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:10.893409   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:10.893471   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:10.929444   59674 cri.go:89] found id: ""
	I0722 11:54:10.929472   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.929481   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:10.929489   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:10.929501   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:10.968567   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:10.968598   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:11.024447   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:11.024484   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:11.039405   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:11.039429   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:11.116322   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:11.116341   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:11.116356   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:13.697581   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:13.711738   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:13.711831   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:13.747711   59674 cri.go:89] found id: ""
	I0722 11:54:13.747742   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.747750   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:13.747757   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:13.747812   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:13.790965   59674 cri.go:89] found id: ""
	I0722 11:54:13.790987   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.790997   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:13.791005   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:13.791053   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:13.829043   59674 cri.go:89] found id: ""
	I0722 11:54:13.829071   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.829080   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:13.829086   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:13.829159   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:13.865542   59674 cri.go:89] found id: ""
	I0722 11:54:13.865560   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.865567   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:13.865572   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:13.865615   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:13.897709   59674 cri.go:89] found id: ""
	I0722 11:54:13.897749   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.897762   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:13.897769   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:13.897833   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:13.931319   59674 cri.go:89] found id: ""
	I0722 11:54:13.931339   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.931348   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:13.931355   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:13.931409   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:13.987927   59674 cri.go:89] found id: ""
	I0722 11:54:13.987954   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.987964   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:13.987970   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:13.988030   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:14.028680   59674 cri.go:89] found id: ""
	I0722 11:54:14.028706   59674 logs.go:276] 0 containers: []
	W0722 11:54:14.028716   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:14.028726   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:14.028743   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:14.089863   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:14.089904   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:14.103664   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:14.103691   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:14.174453   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:14.174479   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:14.174496   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:14.260748   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:14.260780   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:11.677037   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:13.679784   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:12.856073   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:14.857810   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:15.846577   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:17.846873   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:16.800474   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:16.814408   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:16.814472   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:16.849936   59674 cri.go:89] found id: ""
	I0722 11:54:16.849963   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.849972   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:16.849979   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:16.850037   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:16.884323   59674 cri.go:89] found id: ""
	I0722 11:54:16.884349   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.884360   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:16.884367   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:16.884445   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:16.921549   59674 cri.go:89] found id: ""
	I0722 11:54:16.921635   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.921652   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:16.921660   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:16.921726   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:16.959670   59674 cri.go:89] found id: ""
	I0722 11:54:16.959701   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.959711   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:16.959719   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:16.959779   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:16.995577   59674 cri.go:89] found id: ""
	I0722 11:54:16.995605   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.995615   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:16.995624   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:16.995683   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:17.032026   59674 cri.go:89] found id: ""
	I0722 11:54:17.032056   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.032067   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:17.032075   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:17.032156   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:17.068309   59674 cri.go:89] found id: ""
	I0722 11:54:17.068337   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.068348   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:17.068355   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:17.068433   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:17.106731   59674 cri.go:89] found id: ""
	I0722 11:54:17.106760   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.106776   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:17.106787   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:17.106801   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:17.159944   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:17.159971   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:17.174479   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:17.174513   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:17.249311   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:17.249332   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:17.249345   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:17.335527   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:17.335561   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:19.874791   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:19.892887   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:19.892961   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:19.945700   59674 cri.go:89] found id: ""
	I0722 11:54:19.945729   59674 logs.go:276] 0 containers: []
	W0722 11:54:19.945737   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:19.945742   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:19.945799   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:19.996027   59674 cri.go:89] found id: ""
	I0722 11:54:19.996062   59674 logs.go:276] 0 containers: []
	W0722 11:54:19.996072   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:19.996078   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:19.996133   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:20.040793   59674 cri.go:89] found id: ""
	I0722 11:54:20.040820   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.040830   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:20.040837   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:20.040906   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:20.073737   59674 cri.go:89] found id: ""
	I0722 11:54:20.073760   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.073768   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:20.073774   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:20.073817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:20.108255   59674 cri.go:89] found id: ""
	I0722 11:54:20.108280   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.108287   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:20.108294   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:20.108342   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:20.143140   59674 cri.go:89] found id: ""
	I0722 11:54:20.143165   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.143174   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:20.143180   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:20.143225   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:20.177009   59674 cri.go:89] found id: ""
	I0722 11:54:20.177030   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.177037   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:20.177043   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:20.177089   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:20.215743   59674 cri.go:89] found id: ""
	I0722 11:54:20.215765   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.215773   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:20.215781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:20.215791   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:20.267872   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:20.267905   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:20.281601   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:20.281626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:20.352347   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:20.352364   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:20.352376   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:16.178494   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:18.676724   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:17.357519   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:19.856259   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:20.346488   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:22.847018   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:20.431695   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:20.431727   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:22.974218   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:22.988161   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:22.988235   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:23.024542   59674 cri.go:89] found id: ""
	I0722 11:54:23.024571   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.024581   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:23.024588   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:23.024656   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:23.067343   59674 cri.go:89] found id: ""
	I0722 11:54:23.067367   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.067376   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:23.067383   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:23.067443   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:23.103711   59674 cri.go:89] found id: ""
	I0722 11:54:23.103741   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.103751   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:23.103758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:23.103817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:23.137896   59674 cri.go:89] found id: ""
	I0722 11:54:23.137926   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.137937   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:23.137944   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:23.138002   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:23.174689   59674 cri.go:89] found id: ""
	I0722 11:54:23.174722   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.174733   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:23.174742   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:23.174795   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:23.208669   59674 cri.go:89] found id: ""
	I0722 11:54:23.208690   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.208700   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:23.208708   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:23.208766   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:23.243286   59674 cri.go:89] found id: ""
	I0722 11:54:23.243314   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.243326   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:23.243335   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:23.243401   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:23.279277   59674 cri.go:89] found id: ""
	I0722 11:54:23.279303   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.279312   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:23.279324   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:23.279337   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:23.332016   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:23.332045   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:23.346383   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:23.346417   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:23.421449   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:23.421471   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:23.421486   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:23.507395   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:23.507432   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:20.678148   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:23.180048   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:21.856482   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:24.357098   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:25.346414   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:27.847108   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:26.053610   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:26.068359   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:26.068448   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:26.102425   59674 cri.go:89] found id: ""
	I0722 11:54:26.102454   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.102465   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:26.102472   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:26.102531   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:26.135572   59674 cri.go:89] found id: ""
	I0722 11:54:26.135598   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.135608   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:26.135616   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:26.135682   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:26.175015   59674 cri.go:89] found id: ""
	I0722 11:54:26.175044   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.175054   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:26.175062   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:26.175123   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:26.209186   59674 cri.go:89] found id: ""
	I0722 11:54:26.209209   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.209216   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:26.209221   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:26.209275   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:26.248477   59674 cri.go:89] found id: ""
	I0722 11:54:26.248500   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.248507   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:26.248512   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:26.248590   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:26.281481   59674 cri.go:89] found id: ""
	I0722 11:54:26.281506   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.281515   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:26.281520   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:26.281580   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:26.314467   59674 cri.go:89] found id: ""
	I0722 11:54:26.314496   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.314503   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:26.314509   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:26.314556   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:26.349396   59674 cri.go:89] found id: ""
	I0722 11:54:26.349422   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.349431   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:26.349441   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:26.349454   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:26.403227   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:26.403253   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:26.415860   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:26.415882   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:26.484768   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:26.484793   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:26.484809   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:26.563360   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:26.563396   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:29.103764   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:29.117120   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:29.117193   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:29.153198   59674 cri.go:89] found id: ""
	I0722 11:54:29.153241   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.153252   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:29.153260   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:29.153324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:29.190406   59674 cri.go:89] found id: ""
	I0722 11:54:29.190426   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.190433   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:29.190438   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:29.190486   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:29.232049   59674 cri.go:89] found id: ""
	I0722 11:54:29.232073   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.232080   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:29.232085   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:29.232147   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:29.270174   59674 cri.go:89] found id: ""
	I0722 11:54:29.270200   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.270208   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:29.270218   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:29.270268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:29.307709   59674 cri.go:89] found id: ""
	I0722 11:54:29.307733   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.307740   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:29.307746   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:29.307802   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:29.343807   59674 cri.go:89] found id: ""
	I0722 11:54:29.343832   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.343842   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:29.343850   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:29.343907   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:29.380240   59674 cri.go:89] found id: ""
	I0722 11:54:29.380263   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.380270   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:29.380276   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:29.380332   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:29.412785   59674 cri.go:89] found id: ""
	I0722 11:54:29.412811   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.412820   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:29.412830   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:29.412844   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:29.470948   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:29.470985   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:29.485120   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:29.485146   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:29.558760   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:29.558778   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:29.558792   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:29.638093   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:29.638123   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:25.677216   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:28.177196   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:30.179148   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:26.357390   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:28.856928   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:30.345586   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:32.346444   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:34.347606   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:32.183511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:32.196719   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:32.196796   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:32.229436   59674 cri.go:89] found id: ""
	I0722 11:54:32.229466   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.229474   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:32.229480   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:32.229533   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:32.271971   59674 cri.go:89] found id: ""
	I0722 11:54:32.271998   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.272008   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:32.272017   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:32.272086   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:32.302967   59674 cri.go:89] found id: ""
	I0722 11:54:32.302991   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.302999   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:32.303005   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:32.303053   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:32.334443   59674 cri.go:89] found id: ""
	I0722 11:54:32.334468   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.334478   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:32.334485   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:32.334544   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:32.371586   59674 cri.go:89] found id: ""
	I0722 11:54:32.371612   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.371622   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:32.371630   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:32.371693   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:32.419920   59674 cri.go:89] found id: ""
	I0722 11:54:32.419954   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.419966   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:32.419974   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:32.420034   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:32.459377   59674 cri.go:89] found id: ""
	I0722 11:54:32.459398   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.459405   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:32.459411   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:32.459472   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:32.500740   59674 cri.go:89] found id: ""
	I0722 11:54:32.500764   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.500771   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:32.500781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:32.500796   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:32.551285   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:32.551316   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:32.564448   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:32.564474   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:32.637652   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:32.637679   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:32.637694   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:32.721599   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:32.721638   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:35.265202   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:35.278766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:35.278844   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:35.312545   59674 cri.go:89] found id: ""
	I0722 11:54:35.312574   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.312582   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:35.312587   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:35.312637   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:35.346988   59674 cri.go:89] found id: ""
	I0722 11:54:35.347014   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.347024   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:35.347032   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:35.347090   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:35.382876   59674 cri.go:89] found id: ""
	I0722 11:54:35.382908   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.382920   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:35.382929   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:35.382997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:32.677327   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:34.677947   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:31.356011   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:33.356576   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:36.846349   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:39.346311   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:35.418093   59674 cri.go:89] found id: ""
	I0722 11:54:35.418115   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.418122   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:35.418129   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:35.418186   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:35.455262   59674 cri.go:89] found id: ""
	I0722 11:54:35.455291   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.455301   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:35.455306   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:35.455362   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:35.494893   59674 cri.go:89] found id: ""
	I0722 11:54:35.494924   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.494934   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:35.494945   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:35.495007   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:35.529768   59674 cri.go:89] found id: ""
	I0722 11:54:35.529791   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.529798   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:35.529804   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:35.529850   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:35.564972   59674 cri.go:89] found id: ""
	I0722 11:54:35.565001   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.565012   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:35.565024   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:35.565039   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:35.615985   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:35.616025   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:35.630133   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:35.630156   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:35.699669   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:35.699697   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:35.699711   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:35.779737   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:35.779771   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:38.320368   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:38.334371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:38.334443   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:38.371050   59674 cri.go:89] found id: ""
	I0722 11:54:38.371081   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.371088   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:38.371109   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:38.371170   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:38.410676   59674 cri.go:89] found id: ""
	I0722 11:54:38.410698   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.410706   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:38.410712   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:38.410770   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:38.447331   59674 cri.go:89] found id: ""
	I0722 11:54:38.447357   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.447366   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:38.447371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:38.447426   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:38.483548   59674 cri.go:89] found id: ""
	I0722 11:54:38.483589   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.483600   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:38.483608   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:38.483669   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:38.521694   59674 cri.go:89] found id: ""
	I0722 11:54:38.521723   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.521737   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:38.521742   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:38.521799   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:38.560507   59674 cri.go:89] found id: ""
	I0722 11:54:38.560532   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.560543   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:38.560550   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:38.560609   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:38.595734   59674 cri.go:89] found id: ""
	I0722 11:54:38.595761   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.595771   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:38.595778   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:38.595839   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:38.634176   59674 cri.go:89] found id: ""
	I0722 11:54:38.634198   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.634205   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:38.634213   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:38.634224   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:38.688196   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:38.688235   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:38.701554   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:38.701583   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:38.772547   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:38.772575   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:38.772590   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:38.858025   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:38.858056   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:37.179449   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:39.179903   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:35.856424   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:38.357566   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:41.347531   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:43.846195   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:41.400777   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:41.415370   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:41.415427   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:41.448023   59674 cri.go:89] found id: ""
	I0722 11:54:41.448045   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.448052   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:41.448058   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:41.448104   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:41.480745   59674 cri.go:89] found id: ""
	I0722 11:54:41.480766   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.480774   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:41.480779   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:41.480830   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:41.514627   59674 cri.go:89] found id: ""
	I0722 11:54:41.514651   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.514666   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:41.514673   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:41.514731   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:41.548226   59674 cri.go:89] found id: ""
	I0722 11:54:41.548255   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.548267   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:41.548274   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:41.548325   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:41.581361   59674 cri.go:89] found id: ""
	I0722 11:54:41.581383   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.581390   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:41.581396   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:41.581452   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:41.616249   59674 cri.go:89] found id: ""
	I0722 11:54:41.616277   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.616287   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:41.616295   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:41.616361   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:41.651569   59674 cri.go:89] found id: ""
	I0722 11:54:41.651593   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.651601   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:41.651607   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:41.651657   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:41.685173   59674 cri.go:89] found id: ""
	I0722 11:54:41.685194   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.685202   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:41.685209   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:41.685222   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:41.762374   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:41.762393   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:41.762405   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:41.843370   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:41.843403   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:41.883097   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:41.883127   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:41.933824   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:41.933854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:44.447568   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:44.461528   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:44.461608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:44.497926   59674 cri.go:89] found id: ""
	I0722 11:54:44.497951   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.497958   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:44.497963   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:44.498023   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:44.534483   59674 cri.go:89] found id: ""
	I0722 11:54:44.534507   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.534515   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:44.534520   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:44.534565   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:44.573106   59674 cri.go:89] found id: ""
	I0722 11:54:44.573140   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.573148   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:44.573154   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:44.573204   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:44.610565   59674 cri.go:89] found id: ""
	I0722 11:54:44.610612   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.610626   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:44.610634   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:44.610697   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:44.646946   59674 cri.go:89] found id: ""
	I0722 11:54:44.646980   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.646994   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:44.647001   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:44.647060   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:44.685876   59674 cri.go:89] found id: ""
	I0722 11:54:44.685904   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.685913   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:44.685919   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:44.685969   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:44.720398   59674 cri.go:89] found id: ""
	I0722 11:54:44.720425   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.720434   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:44.720441   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:44.720506   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:44.757472   59674 cri.go:89] found id: ""
	I0722 11:54:44.757501   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.757511   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:44.757522   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:44.757535   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:44.807442   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:44.807468   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:44.820432   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:44.820457   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:44.892182   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:44.892199   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:44.892209   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:44.976545   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:44.976580   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:41.677120   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:44.178554   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:40.855578   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:42.856278   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:44.857519   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:45.846257   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:47.846886   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:47.519413   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:47.532974   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:47.533035   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:47.570869   59674 cri.go:89] found id: ""
	I0722 11:54:47.570904   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.570915   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:47.570923   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:47.571055   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:47.606020   59674 cri.go:89] found id: ""
	I0722 11:54:47.606045   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.606052   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:47.606057   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:47.606106   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:47.642717   59674 cri.go:89] found id: ""
	I0722 11:54:47.642741   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.642752   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:47.642758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:47.642817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:47.677761   59674 cri.go:89] found id: ""
	I0722 11:54:47.677786   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.677796   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:47.677803   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:47.677863   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:47.710989   59674 cri.go:89] found id: ""
	I0722 11:54:47.711016   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.711025   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:47.711032   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:47.711097   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:47.744814   59674 cri.go:89] found id: ""
	I0722 11:54:47.744839   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.744847   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:47.744853   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:47.744904   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:47.778926   59674 cri.go:89] found id: ""
	I0722 11:54:47.778953   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.778960   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:47.778965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:47.779015   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:47.818419   59674 cri.go:89] found id: ""
	I0722 11:54:47.818458   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.818465   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:47.818473   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:47.818485   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:47.870867   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:47.870892   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:47.884504   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:47.884523   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:47.952449   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:47.952470   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:47.952485   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:48.035731   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:48.035763   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:46.181522   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:48.676888   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:46.860517   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:49.356456   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:50.346125   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:52.848790   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:50.589071   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:50.602786   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:50.602880   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:50.638324   59674 cri.go:89] found id: ""
	I0722 11:54:50.638355   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.638366   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:50.638375   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:50.638438   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:50.674906   59674 cri.go:89] found id: ""
	I0722 11:54:50.674932   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.674947   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:50.674955   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:50.675017   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:50.709284   59674 cri.go:89] found id: ""
	I0722 11:54:50.709313   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.709322   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:50.709328   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:50.709387   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:50.748595   59674 cri.go:89] found id: ""
	I0722 11:54:50.748623   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.748632   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:50.748638   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:50.748695   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:50.782681   59674 cri.go:89] found id: ""
	I0722 11:54:50.782707   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.782716   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:50.782721   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:50.782797   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:50.820037   59674 cri.go:89] found id: ""
	I0722 11:54:50.820067   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.820077   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:50.820084   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:50.820150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:50.857807   59674 cri.go:89] found id: ""
	I0722 11:54:50.857835   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.857845   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:50.857852   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:50.857925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:50.894924   59674 cri.go:89] found id: ""
	I0722 11:54:50.894946   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.894954   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:50.894962   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:50.894981   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:50.947373   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:50.947407   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:50.962243   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:50.962272   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:51.041450   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:51.041474   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:51.041488   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:51.133982   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:51.134018   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:53.678461   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:53.691710   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:53.691778   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:53.726266   59674 cri.go:89] found id: ""
	I0722 11:54:53.726294   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.726305   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:53.726313   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:53.726366   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:53.759262   59674 cri.go:89] found id: ""
	I0722 11:54:53.759291   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.759303   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:53.759311   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:53.759381   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:53.795859   59674 cri.go:89] found id: ""
	I0722 11:54:53.795894   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.795906   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:53.795913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:53.795975   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:53.842343   59674 cri.go:89] found id: ""
	I0722 11:54:53.842366   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.842379   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:53.842387   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:53.842444   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:53.882648   59674 cri.go:89] found id: ""
	I0722 11:54:53.882674   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.882684   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:53.882691   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:53.882751   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:53.914352   59674 cri.go:89] found id: ""
	I0722 11:54:53.914373   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.914380   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:53.914386   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:53.914442   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:53.952257   59674 cri.go:89] found id: ""
	I0722 11:54:53.952286   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.952296   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:53.952301   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:53.952348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:53.991612   59674 cri.go:89] found id: ""
	I0722 11:54:53.991642   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.991651   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:53.991661   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:53.991682   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:54.065253   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:54.065271   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:54.065285   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:54.153570   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:54.153603   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:54.195100   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:54.195138   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:54.246784   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:54.246812   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:50.677516   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:53.180319   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.182749   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:51.356623   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:53.856817   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.346845   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:57.846691   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:56.762702   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:56.776501   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:56.776567   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:56.809838   59674 cri.go:89] found id: ""
	I0722 11:54:56.809866   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.809874   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:56.809882   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:56.809934   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:56.845567   59674 cri.go:89] found id: ""
	I0722 11:54:56.845594   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.845602   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:56.845610   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:56.845672   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:56.879899   59674 cri.go:89] found id: ""
	I0722 11:54:56.879929   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.879939   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:56.879946   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:56.880000   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:56.911631   59674 cri.go:89] found id: ""
	I0722 11:54:56.911658   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.911667   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:56.911675   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:56.911734   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:56.946101   59674 cri.go:89] found id: ""
	I0722 11:54:56.946124   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.946132   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:56.946142   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:56.946211   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:56.980265   59674 cri.go:89] found id: ""
	I0722 11:54:56.980289   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.980301   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:56.980308   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:56.980367   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:57.014902   59674 cri.go:89] found id: ""
	I0722 11:54:57.014935   59674 logs.go:276] 0 containers: []
	W0722 11:54:57.014951   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:57.014958   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:57.015021   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:57.051573   59674 cri.go:89] found id: ""
	I0722 11:54:57.051597   59674 logs.go:276] 0 containers: []
	W0722 11:54:57.051605   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:57.051613   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:57.051626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:57.065650   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:57.065683   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:57.133230   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:57.133257   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:57.133275   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:57.217002   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:57.217038   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:57.260236   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:57.260264   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:59.812785   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:59.826782   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:59.826836   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:59.863375   59674 cri.go:89] found id: ""
	I0722 11:54:59.863404   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.863414   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:59.863423   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:59.863484   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:59.902161   59674 cri.go:89] found id: ""
	I0722 11:54:59.902193   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.902204   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:59.902211   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:59.902263   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:59.945153   59674 cri.go:89] found id: ""
	I0722 11:54:59.945182   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.945193   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:59.945201   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:59.945265   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:59.989535   59674 cri.go:89] found id: ""
	I0722 11:54:59.989562   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.989570   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:59.989575   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:59.989643   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:00.028977   59674 cri.go:89] found id: ""
	I0722 11:55:00.029001   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.029009   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:00.029017   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:00.029068   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:00.065396   59674 cri.go:89] found id: ""
	I0722 11:55:00.065425   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.065437   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:00.065447   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:00.065502   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:00.104354   59674 cri.go:89] found id: ""
	I0722 11:55:00.104397   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.104409   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:00.104417   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:00.104480   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:00.141798   59674 cri.go:89] found id: ""
	I0722 11:55:00.141822   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.141829   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:00.141838   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:00.141853   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:00.195791   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:00.195823   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:00.214812   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:00.214845   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:00.307286   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:00.307311   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:00.307323   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:00.409770   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:00.409805   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:57.676737   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:59.677273   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.857348   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:58.356555   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:59.846954   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.345998   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:04.346077   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.951630   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:02.964673   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:02.964728   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:03.005256   59674 cri.go:89] found id: ""
	I0722 11:55:03.005285   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.005296   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:03.005303   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:03.005359   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:03.037558   59674 cri.go:89] found id: ""
	I0722 11:55:03.037587   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.037595   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:03.037600   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:03.037646   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:03.071168   59674 cri.go:89] found id: ""
	I0722 11:55:03.071196   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.071206   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:03.071214   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:03.071271   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:03.104212   59674 cri.go:89] found id: ""
	I0722 11:55:03.104238   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.104248   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:03.104255   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:03.104313   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:03.141378   59674 cri.go:89] found id: ""
	I0722 11:55:03.141401   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.141409   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:03.141414   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:03.141458   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:03.178881   59674 cri.go:89] found id: ""
	I0722 11:55:03.178906   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.178915   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:03.178923   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:03.178987   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:03.215768   59674 cri.go:89] found id: ""
	I0722 11:55:03.215796   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.215804   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:03.215810   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:03.215854   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:03.256003   59674 cri.go:89] found id: ""
	I0722 11:55:03.256029   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.256041   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:03.256051   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:03.256069   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:03.308182   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:03.308216   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:03.323870   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:03.323903   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:03.406646   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:03.406670   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:03.406682   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:03.490947   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:03.490984   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:01.677312   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:03.677505   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:00.856013   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.856211   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:04.857113   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.348448   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:08.846007   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.030341   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:06.046814   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:06.046874   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:06.088735   59674 cri.go:89] found id: ""
	I0722 11:55:06.088756   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.088764   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:06.088770   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:06.088823   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:06.153138   59674 cri.go:89] found id: ""
	I0722 11:55:06.153165   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.153174   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:06.153181   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:06.153241   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:06.203479   59674 cri.go:89] found id: ""
	I0722 11:55:06.203506   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.203516   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:06.203523   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:06.203585   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:06.239632   59674 cri.go:89] found id: ""
	I0722 11:55:06.239661   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.239671   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:06.239678   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:06.239739   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:06.278663   59674 cri.go:89] found id: ""
	I0722 11:55:06.278693   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.278703   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:06.278711   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:06.278772   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:06.318291   59674 cri.go:89] found id: ""
	I0722 11:55:06.318315   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.318323   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:06.318329   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:06.318382   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:06.355362   59674 cri.go:89] found id: ""
	I0722 11:55:06.355383   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.355390   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:06.355395   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:06.355446   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:06.395032   59674 cri.go:89] found id: ""
	I0722 11:55:06.395062   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.395073   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:06.395084   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:06.395098   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:06.451585   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:06.451623   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:06.466009   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:06.466037   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:06.534051   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:06.534071   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:06.534082   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:06.617165   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:06.617202   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:09.155242   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:09.169327   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:09.169389   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:09.209138   59674 cri.go:89] found id: ""
	I0722 11:55:09.209165   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.209174   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:09.209181   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:09.209243   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:09.249129   59674 cri.go:89] found id: ""
	I0722 11:55:09.249156   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.249167   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:09.249175   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:09.249237   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:09.284350   59674 cri.go:89] found id: ""
	I0722 11:55:09.284374   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.284400   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:09.284416   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:09.284487   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:09.317288   59674 cri.go:89] found id: ""
	I0722 11:55:09.317315   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.317322   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:09.317327   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:09.317374   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:09.353227   59674 cri.go:89] found id: ""
	I0722 11:55:09.353249   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.353259   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:09.353266   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:09.353324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:09.388376   59674 cri.go:89] found id: ""
	I0722 11:55:09.388434   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.388442   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:09.388448   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:09.388498   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:09.422197   59674 cri.go:89] found id: ""
	I0722 11:55:09.422221   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.422228   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:09.422235   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:09.422282   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:09.455321   59674 cri.go:89] found id: ""
	I0722 11:55:09.455350   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.455360   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:09.455370   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:09.455384   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:09.536331   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:09.536366   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:09.578847   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:09.578880   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:09.630597   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:09.630626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:09.644531   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:09.644557   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:09.710502   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:05.677998   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:07.678875   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:10.179254   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.857151   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:09.355988   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:11.345887   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:13.346945   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:12.210716   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:12.223909   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:12.223969   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:12.259241   59674 cri.go:89] found id: ""
	I0722 11:55:12.259266   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.259275   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:12.259282   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:12.259344   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:12.293967   59674 cri.go:89] found id: ""
	I0722 11:55:12.294000   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.294007   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:12.294013   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:12.294061   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:12.328073   59674 cri.go:89] found id: ""
	I0722 11:55:12.328106   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.328114   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:12.328121   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:12.328180   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:12.363176   59674 cri.go:89] found id: ""
	I0722 11:55:12.363200   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.363207   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:12.363213   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:12.363287   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:12.398145   59674 cri.go:89] found id: ""
	I0722 11:55:12.398171   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.398180   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:12.398185   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:12.398231   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:12.431824   59674 cri.go:89] found id: ""
	I0722 11:55:12.431853   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.431861   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:12.431867   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:12.431925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:12.465097   59674 cri.go:89] found id: ""
	I0722 11:55:12.465128   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.465135   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:12.465140   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:12.465186   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:12.502934   59674 cri.go:89] found id: ""
	I0722 11:55:12.502965   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.502974   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:12.502984   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:12.502999   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:12.541950   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:12.541979   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:12.592632   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:12.592660   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:12.606073   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:12.606098   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:12.675388   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:12.675417   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:12.675432   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:15.253008   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:15.266957   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:15.267028   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:15.303035   59674 cri.go:89] found id: ""
	I0722 11:55:15.303069   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.303080   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:15.303088   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:15.303150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:15.338089   59674 cri.go:89] found id: ""
	I0722 11:55:15.338113   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.338121   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:15.338126   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:15.338184   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:15.376973   59674 cri.go:89] found id: ""
	I0722 11:55:15.376998   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.377005   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:15.377015   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:15.377075   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:12.678613   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.178912   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:11.356248   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:13.855992   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.845568   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:17.845680   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.416466   59674 cri.go:89] found id: ""
	I0722 11:55:15.416491   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.416500   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:15.416507   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:15.416565   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:15.456472   59674 cri.go:89] found id: ""
	I0722 11:55:15.456501   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.456511   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:15.456519   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:15.456580   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:15.491963   59674 cri.go:89] found id: ""
	I0722 11:55:15.491991   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.491999   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:15.492005   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:15.492062   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:15.530819   59674 cri.go:89] found id: ""
	I0722 11:55:15.530847   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.530857   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:15.530865   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:15.530934   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:15.569388   59674 cri.go:89] found id: ""
	I0722 11:55:15.569415   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.569422   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:15.569430   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:15.569439   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:15.623949   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:15.623981   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:15.637828   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:15.637848   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:15.707733   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:15.707754   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:15.707765   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:15.787435   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:15.787473   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:18.329310   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:18.342412   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:18.342476   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:18.379542   59674 cri.go:89] found id: ""
	I0722 11:55:18.379563   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.379570   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:18.379575   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:18.379657   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:18.414442   59674 cri.go:89] found id: ""
	I0722 11:55:18.414468   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.414477   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:18.414483   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:18.414526   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:18.454571   59674 cri.go:89] found id: ""
	I0722 11:55:18.454598   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.454608   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:18.454614   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:18.454658   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:18.491012   59674 cri.go:89] found id: ""
	I0722 11:55:18.491039   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.491047   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:18.491052   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:18.491114   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:18.525923   59674 cri.go:89] found id: ""
	I0722 11:55:18.525952   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.525962   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:18.525970   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:18.526031   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:18.560288   59674 cri.go:89] found id: ""
	I0722 11:55:18.560315   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.560325   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:18.560332   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:18.560412   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:18.596674   59674 cri.go:89] found id: ""
	I0722 11:55:18.596698   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.596706   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:18.596712   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:18.596766   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:18.635012   59674 cri.go:89] found id: ""
	I0722 11:55:18.635034   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.635041   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:18.635049   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:18.635060   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:18.685999   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:18.686024   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:18.700085   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:18.700108   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:18.765465   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:18.765484   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:18.765495   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:18.846554   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:18.846592   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:17.179144   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:19.677144   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.857428   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:18.356050   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:19.846343   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:22.345281   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:24.346147   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:21.389684   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:21.401964   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:21.402042   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:21.438128   59674 cri.go:89] found id: ""
	I0722 11:55:21.438156   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.438165   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:21.438171   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:21.438258   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:21.475793   59674 cri.go:89] found id: ""
	I0722 11:55:21.475819   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.475828   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:21.475833   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:21.475878   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:21.510238   59674 cri.go:89] found id: ""
	I0722 11:55:21.510265   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.510273   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:21.510278   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:21.510333   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:21.548293   59674 cri.go:89] found id: ""
	I0722 11:55:21.548320   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.548331   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:21.548337   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:21.548403   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:21.584542   59674 cri.go:89] found id: ""
	I0722 11:55:21.584573   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.584584   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:21.584591   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:21.584655   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:21.621709   59674 cri.go:89] found id: ""
	I0722 11:55:21.621745   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.621758   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:21.621767   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:21.621854   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:21.656111   59674 cri.go:89] found id: ""
	I0722 11:55:21.656134   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.656143   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:21.656148   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:21.656197   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:21.692324   59674 cri.go:89] found id: ""
	I0722 11:55:21.692353   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.692363   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:21.692374   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:21.692405   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:21.769527   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:21.769550   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:21.769566   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:21.850069   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:21.850107   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:21.890781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:21.890816   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:21.952170   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:21.952211   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:24.467001   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:24.481526   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:24.481594   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:24.518694   59674 cri.go:89] found id: ""
	I0722 11:55:24.518724   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.518734   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:24.518740   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:24.518798   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:24.554606   59674 cri.go:89] found id: ""
	I0722 11:55:24.554629   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.554637   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:24.554642   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:24.554703   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:24.592042   59674 cri.go:89] found id: ""
	I0722 11:55:24.592072   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.592083   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:24.592090   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:24.592158   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:24.624456   59674 cri.go:89] found id: ""
	I0722 11:55:24.624479   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.624487   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:24.624493   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:24.624540   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:24.659502   59674 cri.go:89] found id: ""
	I0722 11:55:24.659526   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.659533   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:24.659541   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:24.659586   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:24.695548   59674 cri.go:89] found id: ""
	I0722 11:55:24.695572   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.695580   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:24.695585   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:24.695632   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:24.730320   59674 cri.go:89] found id: ""
	I0722 11:55:24.730362   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.730383   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:24.730391   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:24.730451   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:24.763002   59674 cri.go:89] found id: ""
	I0722 11:55:24.763031   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.763042   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:24.763053   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:24.763068   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:24.801537   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:24.801568   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:24.855157   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:24.855189   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:24.872946   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:24.872983   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:24.943654   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:24.943683   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:24.943697   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:21.677205   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:23.677250   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:20.857023   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:22.857266   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:25.356958   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:24.840700   59477 pod_ready.go:81] duration metric: took 4m0.000727978s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" ...
	E0722 11:55:24.840728   59477 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:55:24.840745   59477 pod_ready.go:38] duration metric: took 4m14.023350526s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:55:24.840771   59477 kubeadm.go:597] duration metric: took 4m21.561007849s to restartPrimaryControlPlane
	W0722 11:55:24.840842   59477 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:55:24.840871   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
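	At 11:55:24 the embed-certs profile (pid 59477) gives up on the in-place control-plane restart: its metrics-server pod never reported Ready within 4m0s, so minikube falls back to a full "kubeadm reset" followed by a fresh "kubeadm init". The reset command from the log, shown on its own (a sketch; the PATH prefix selects minikube's bundled kubeadm inside the guest):

	    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
	      kubeadm reset --cri-socket /var/run/crio/crio.sock --force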
	I0722 11:55:27.532539   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:27.551073   59674 kubeadm.go:597] duration metric: took 4m3.599954496s to restartPrimaryControlPlane
	W0722 11:55:27.551154   59674 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:55:27.551183   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:55:28.607726   59674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.056515088s)
	I0722 11:55:28.607808   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:55:28.622638   59674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:55:28.633327   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:55:28.643630   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:55:28.643657   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:55:28.643708   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:55:28.655424   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:55:28.655488   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:55:28.666415   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:55:28.678321   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:55:28.678387   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:55:28.687990   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:55:28.700637   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:55:28.700688   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:55:28.711737   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:55:28.723611   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:55:28.723672   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:55:28.734841   59674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:55:28.966498   59674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
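	After the reset, the expected kubeconfigs under /etc/kubernetes no longer exist, so the stale-config cleanup above is effectively a no-op: each grep for the control-plane endpoint exits with status 2 (file missing) and the matching "rm -f" succeeds trivially, after which "kubeadm init" is launched with the long --ignore-preflight-errors list so it can reuse the existing /var/lib/minikube directories and manifests. A condensed sketch of that cleanup loop, assuming the same four files as the log:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # keep the file only if it already points at the expected endpoint
	      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done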
	I0722 11:55:25.677562   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:27.677626   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:29.678017   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:27.359533   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:29.856428   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:32.177943   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:34.677244   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:32.356225   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:34.357127   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:36.677815   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:39.176631   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:36.857121   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:38.857187   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:41.177346   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:43.179961   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:41.357029   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:43.857548   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:45.676921   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:47.677104   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:50.177979   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:45.858212   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:48.355737   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:50.357352   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:52.179852   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:54.678525   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:52.856789   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:54.857581   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
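	The interleaved 58921 and 60225 lines in this stretch come from two other profiles running in parallel, each polling its metrics-server pod for the Ready condition every couple of seconds; both hit the same 4m0s limit shortly afterwards. An equivalent one-shot check with kubectl (a sketch: the profile name is a placeholder, and the k8s-app=metrics-server label is the addon's usual selector rather than something shown in this log):

	    kubectl --context <profile> -n kube-system wait pod \
	      -l k8s-app=metrics-server --for=condition=Ready --timeout=4m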
	I0722 11:55:56.291211   59477 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.450312515s)
	I0722 11:55:56.291284   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:55:56.307108   59477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:55:56.316823   59477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:55:56.325987   59477 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:55:56.326008   59477 kubeadm.go:157] found existing configuration files:
	
	I0722 11:55:56.326040   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:55:56.334979   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:55:56.335022   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:55:56.344230   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:55:56.352903   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:55:56.352952   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:55:56.362589   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:55:56.371907   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:55:56.371960   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:55:56.381248   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:55:56.389979   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:55:56.390029   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:55:56.399143   59477 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:55:56.451195   59477 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 11:55:56.451336   59477 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:55:56.583288   59477 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:55:56.583416   59477 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:55:56.583545   59477 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:55:56.812941   59477 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:55:56.814801   59477 out.go:204]   - Generating certificates and keys ...
	I0722 11:55:56.814907   59477 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:55:56.815004   59477 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:55:56.815107   59477 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:55:56.815158   59477 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:55:56.815245   59477 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:55:56.815328   59477 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:55:56.815398   59477 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:55:56.815472   59477 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:55:56.815551   59477 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:55:56.815665   59477 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:55:56.815720   59477 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:55:56.815792   59477 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:55:56.905480   59477 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:55:57.235259   59477 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:55:57.382716   59477 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:55:57.782474   59477 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:55:57.975512   59477 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:55:57.975939   59477 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:55:57.978251   59477 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:55:57.980183   59477 out.go:204]   - Booting up control plane ...
	I0722 11:55:57.980296   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:55:57.980407   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:55:57.980501   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:55:57.997417   59477 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:55:57.998246   59477 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:55:57.998298   59477 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:55:58.125569   59477 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:55:58.125669   59477 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:55:59.127130   59477 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00142245s
	I0722 11:55:59.127288   59477 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:55:56.679572   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:59.177683   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:56.858200   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:59.356467   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:04.131970   59477 kubeadm.go:310] [api-check] The API server is healthy after 5.00210234s
	I0722 11:56:04.145149   59477 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:56:04.162087   59477 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:56:04.189220   59477 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:56:04.189501   59477 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-802149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:56:04.201016   59477 kubeadm.go:310] [bootstrap-token] Using token: kquhfx.1qbb4r033babuox0
	I0722 11:56:04.202331   59477 out.go:204]   - Configuring RBAC rules ...
	I0722 11:56:04.202440   59477 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:56:04.207324   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:56:04.217174   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:56:04.221591   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:56:04.225670   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:56:04.229980   59477 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:56:04.540237   59477 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:56:01.677898   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:03.678604   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:05.015791   59477 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:56:05.538526   59477 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:56:05.539474   59477 kubeadm.go:310] 
	I0722 11:56:05.539573   59477 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:56:05.539585   59477 kubeadm.go:310] 
	I0722 11:56:05.539684   59477 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:56:05.539701   59477 kubeadm.go:310] 
	I0722 11:56:05.539735   59477 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:56:05.539818   59477 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:56:05.539894   59477 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:56:05.539903   59477 kubeadm.go:310] 
	I0722 11:56:05.540003   59477 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:56:05.540026   59477 kubeadm.go:310] 
	I0722 11:56:05.540102   59477 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:56:05.540111   59477 kubeadm.go:310] 
	I0722 11:56:05.540178   59477 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:56:05.540280   59477 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:56:05.540390   59477 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:56:05.540399   59477 kubeadm.go:310] 
	I0722 11:56:05.540496   59477 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:56:05.540612   59477 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:56:05.540621   59477 kubeadm.go:310] 
	I0722 11:56:05.540765   59477 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kquhfx.1qbb4r033babuox0 \
	I0722 11:56:05.540917   59477 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:56:05.540954   59477 kubeadm.go:310] 	--control-plane 
	I0722 11:56:05.540963   59477 kubeadm.go:310] 
	I0722 11:56:05.541073   59477 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:56:05.541082   59477 kubeadm.go:310] 
	I0722 11:56:05.541188   59477 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kquhfx.1qbb4r033babuox0 \
	I0722 11:56:05.541330   59477 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:56:05.541765   59477 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:56:05.541892   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:56:05.541910   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:56:05.543345   59477 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:56:01.357811   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:03.359464   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:04.851108   60225 pod_ready.go:81] duration metric: took 4m0.000807254s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" ...
	E0722 11:56:04.851137   60225 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:56:04.851154   60225 pod_ready.go:38] duration metric: took 4m12.048821409s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:04.851185   60225 kubeadm.go:597] duration metric: took 4m21.969513024s to restartPrimaryControlPlane
	W0722 11:56:04.851256   60225 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:56:04.851288   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:56:05.544535   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:56:05.556946   59477 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:56:05.578633   59477 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:56:05.578705   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:05.578715   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-802149 minikube.k8s.io/updated_at=2024_07_22T11_56_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=embed-certs-802149 minikube.k8s.io/primary=true
	I0722 11:56:05.614944   59477 ops.go:34] apiserver oom_adj: -16
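	With "kubeadm init" finished, minikube writes the bridge CNI config, grants cluster-admin to the kube-system default service account via the minikube-rbac binding, labels the node as the primary control plane, and records the API server's oom_adj (-16 here). The equivalent commands, taken from the log (the label command is shortened to a single label; the contents of 1-k8s.conflist are not shown in the log):

	    sudo mkdir -p /etc/cni/net.d    # bridge config is copied here as 1-k8s.conflist
	    sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac \
	      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      label --overwrite nodes embed-certs-802149 minikube.k8s.io/primary=true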
	I0722 11:56:05.773354   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:06.273578   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:06.773980   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:07.274302   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:07.774175   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:08.274316   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:08.774096   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:09.273401   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:05.678724   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:08.178575   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:09.774010   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.274337   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.773845   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:11.273387   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:11.773610   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:12.273579   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:12.774429   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:13.273474   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:13.774397   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:14.273900   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.677662   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:12.679646   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:15.177660   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:14.774140   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:15.273579   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:15.773981   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:16.273668   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:16.773814   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:17.274094   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:17.773477   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:18.273407   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:18.774424   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:19.274215   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:19.371507   59477 kubeadm.go:1113] duration metric: took 13.792861511s to wait for elevateKubeSystemPrivileges
	I0722 11:56:19.371549   59477 kubeadm.go:394] duration metric: took 5m16.138448524s to StartCluster
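	The repeated "kubectl get sa default" calls between 11:56:05 and 11:56:19 are minikube waiting for the default service account to exist in the default namespace; the controller-manager's service-account controller creates it only once the control plane is fully up, so its appearance is the signal that kube-system privileges can be elevated (13.8s here, roughly 500ms between retries). A sketch of the same poll using the in-VM kubectl path from the log:

	    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done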
	I0722 11:56:19.371572   59477 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:19.371660   59477 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:56:19.373430   59477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:19.373759   59477 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:56:19.373841   59477 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:56:19.373922   59477 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-802149"
	I0722 11:56:19.373932   59477 addons.go:69] Setting default-storageclass=true in profile "embed-certs-802149"
	I0722 11:56:19.373962   59477 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-802149"
	I0722 11:56:19.373963   59477 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-802149"
	W0722 11:56:19.373971   59477 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:56:19.373974   59477 addons.go:69] Setting metrics-server=true in profile "embed-certs-802149"
	I0722 11:56:19.373998   59477 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:56:19.374004   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.374013   59477 addons.go:234] Setting addon metrics-server=true in "embed-certs-802149"
	W0722 11:56:19.374021   59477 addons.go:243] addon metrics-server should already be in state true
	I0722 11:56:19.374044   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.374353   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374376   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374383   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.374390   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374401   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.374418   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.375347   59477 out.go:177] * Verifying Kubernetes components...
	I0722 11:56:19.376850   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:56:19.393500   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0722 11:56:19.394178   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.394524   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36485
	I0722 11:56:19.394704   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0722 11:56:19.394894   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.395064   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395087   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395137   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.395433   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395451   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395471   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.395586   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395607   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395653   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.395754   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.395956   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.396317   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.396345   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.396481   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.396512   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.399476   59477 addons.go:234] Setting addon default-storageclass=true in "embed-certs-802149"
	W0722 11:56:19.399502   59477 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:56:19.399530   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.399879   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.399908   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.411862   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44855
	I0722 11:56:19.412247   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.412708   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.412733   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.413106   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.413324   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.414100   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0722 11:56:19.414530   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.415017   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.415041   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.415100   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.415300   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0722 11:56:19.415340   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.415574   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.415662   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.416068   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.416095   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.416416   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.416861   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.416905   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.417086   59477 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:56:19.417365   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.418373   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:56:19.418392   59477 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:56:19.418411   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.419202   59477 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:56:19.420581   59477 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:19.420595   59477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:56:19.420608   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.421600   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.422201   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.422224   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.422367   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.422535   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.422697   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.422820   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.423577   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.424183   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.424211   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.424347   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.424543   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.424694   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.424812   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.432998   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0722 11:56:19.433395   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.433820   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.433837   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.434137   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.434300   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.435804   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.436013   59477 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:19.436029   59477 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:56:19.436043   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.439161   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.439507   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.439527   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.439666   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.439832   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.439968   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.440111   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.579586   59477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:56:19.613199   59477 node_ready.go:35] waiting up to 6m0s for node "embed-certs-802149" to be "Ready" ...
	I0722 11:56:19.621008   59477 node_ready.go:49] node "embed-certs-802149" has status "Ready":"True"
	I0722 11:56:19.621026   59477 node_ready.go:38] duration metric: took 7.803634ms for node "embed-certs-802149" to be "Ready" ...
	I0722 11:56:19.621035   59477 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:19.626247   59477 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:17.676844   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:19.677982   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:19.721316   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:19.751087   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:19.752762   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:56:19.752782   59477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:56:19.855879   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:56:19.855913   59477 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:56:19.929321   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:19.929353   59477 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:56:19.985335   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
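	Addon enablement is plain "kubectl apply" against manifests minikube has just copied into /etc/kubernetes/addons: storage-provisioner and storageclass are applied individually, and the four metrics-server manifests are applied in one call. The commands as they appear in the log:

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl \
	      apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl \
	      apply -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	            -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	            -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	            -f /etc/kubernetes/addons/metrics-server-service.yaml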
	I0722 11:56:20.449104   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449132   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449106   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449220   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449514   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449514   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.449531   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449540   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.449553   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449566   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449879   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449880   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.449902   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.450851   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.450865   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.450872   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.450877   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.451078   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.451104   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.451119   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.470273   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.470292   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.470576   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.470623   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.470597   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.627931   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.627953   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.628276   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.628294   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.628293   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.628308   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.628317   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.628560   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.628605   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.628619   59477 addons.go:475] Verifying addon metrics-server=true in "embed-certs-802149"
	I0722 11:56:20.628625   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.630168   59477 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:56:20.631410   59477 addons.go:510] duration metric: took 1.257573445s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 11:56:21.631628   59477 pod_ready.go:102] pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:22.159823   59477 pod_ready.go:92] pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.159847   59477 pod_ready.go:81] duration metric: took 2.533579062s for pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.159856   59477 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.180462   59477 pod_ready.go:92] pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.180487   59477 pod_ready.go:81] duration metric: took 20.623565ms for pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.180499   59477 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.194180   59477 pod_ready.go:92] pod "etcd-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.194207   59477 pod_ready.go:81] duration metric: took 13.700217ms for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.194219   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.199321   59477 pod_ready.go:92] pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.199343   59477 pod_ready.go:81] duration metric: took 5.116564ms for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.199355   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.203845   59477 pod_ready.go:92] pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.203865   59477 pod_ready.go:81] duration metric: took 4.502825ms for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.203875   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w89tg" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.529773   59477 pod_ready.go:92] pod "kube-proxy-w89tg" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.529797   59477 pod_ready.go:81] duration metric: took 325.914184ms for pod "kube-proxy-w89tg" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.529809   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.930597   59477 pod_ready.go:92] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.930620   59477 pod_ready.go:81] duration metric: took 400.802915ms for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.930631   59477 pod_ready.go:38] duration metric: took 3.309586025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:22.930649   59477 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:56:22.930707   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:56:22.946660   59477 api_server.go:72] duration metric: took 3.57286966s to wait for apiserver process to appear ...
	I0722 11:56:22.946684   59477 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:56:22.946703   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:56:22.950940   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I0722 11:56:22.951817   59477 api_server.go:141] control plane version: v1.30.3
	I0722 11:56:22.951840   59477 api_server.go:131] duration metric: took 5.148295ms to wait for apiserver health ...
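	The post-restart health gate is a GET on the API server's /healthz endpoint followed by a version probe. The log performs this with an in-process HTTP client; the same check can be reproduced with curl (a sketch: pointing --cacert at the cluster CA under kubeadm's certificateDir from the init output above is an assumption, since the log does not show which credentials the probe uses):

	    curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.72.113:8443/healthz
	    # expected output: ok   (HTTP 200, matching the "returned 200: ok" lines above)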
	I0722 11:56:22.951848   59477 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:56:23.134122   59477 system_pods.go:59] 9 kube-system pods found
	I0722 11:56:23.134153   59477 system_pods.go:61] "coredns-7db6d8ff4d-c2dkr" [c82a689e-5a99-4889-808f-3e1e199323d8] Running
	I0722 11:56:23.134159   59477 system_pods.go:61] "coredns-7db6d8ff4d-kz8d9" [26d2d65c-aa13-4d94-b091-bf674fee0185] Running
	I0722 11:56:23.134163   59477 system_pods.go:61] "etcd-embed-certs-802149" [a30aead7-f9cf-487f-9d2e-dac877edf07a] Running
	I0722 11:56:23.134166   59477 system_pods.go:61] "kube-apiserver-embed-certs-802149" [b7f50315-5043-4abd-a40e-c2d285b66faa] Running
	I0722 11:56:23.134169   59477 system_pods.go:61] "kube-controller-manager-embed-certs-802149" [e96d4b9d-0bf0-40b4-b483-ba558587fe91] Running
	I0722 11:56:23.134172   59477 system_pods.go:61] "kube-proxy-w89tg" [da4d3074-e552-4c7b-ba0f-f57a3b80f529] Running
	I0722 11:56:23.134175   59477 system_pods.go:61] "kube-scheduler-embed-certs-802149" [5f1d8d32-7564-4d2a-ba97-283754771b15] Running
	I0722 11:56:23.134181   59477 system_pods.go:61] "metrics-server-569cc877fc-88d4n" [b705d674-b431-4946-aa67-871d7d2f9e08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:56:23.134186   59477 system_pods.go:61] "storage-provisioner" [a68fcb5f-42b5-408e-9c10-d86b14a1b993] Running
	I0722 11:56:23.134195   59477 system_pods.go:74] duration metric: took 182.340929ms to wait for pod list to return data ...
	I0722 11:56:23.134204   59477 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:56:23.330549   59477 default_sa.go:45] found service account: "default"
	I0722 11:56:23.330573   59477 default_sa.go:55] duration metric: took 196.363183ms for default service account to be created ...
	I0722 11:56:23.330582   59477 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:56:23.532750   59477 system_pods.go:86] 9 kube-system pods found
	I0722 11:56:23.532774   59477 system_pods.go:89] "coredns-7db6d8ff4d-c2dkr" [c82a689e-5a99-4889-808f-3e1e199323d8] Running
	I0722 11:56:23.532779   59477 system_pods.go:89] "coredns-7db6d8ff4d-kz8d9" [26d2d65c-aa13-4d94-b091-bf674fee0185] Running
	I0722 11:56:23.532784   59477 system_pods.go:89] "etcd-embed-certs-802149" [a30aead7-f9cf-487f-9d2e-dac877edf07a] Running
	I0722 11:56:23.532788   59477 system_pods.go:89] "kube-apiserver-embed-certs-802149" [b7f50315-5043-4abd-a40e-c2d285b66faa] Running
	I0722 11:56:23.532795   59477 system_pods.go:89] "kube-controller-manager-embed-certs-802149" [e96d4b9d-0bf0-40b4-b483-ba558587fe91] Running
	I0722 11:56:23.532799   59477 system_pods.go:89] "kube-proxy-w89tg" [da4d3074-e552-4c7b-ba0f-f57a3b80f529] Running
	I0722 11:56:23.532802   59477 system_pods.go:89] "kube-scheduler-embed-certs-802149" [5f1d8d32-7564-4d2a-ba97-283754771b15] Running
	I0722 11:56:23.532809   59477 system_pods.go:89] "metrics-server-569cc877fc-88d4n" [b705d674-b431-4946-aa67-871d7d2f9e08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:56:23.532813   59477 system_pods.go:89] "storage-provisioner" [a68fcb5f-42b5-408e-9c10-d86b14a1b993] Running
	I0722 11:56:23.532821   59477 system_pods.go:126] duration metric: took 202.234836ms to wait for k8s-apps to be running ...
	I0722 11:56:23.532832   59477 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:56:23.532876   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:56:23.547953   59477 system_svc.go:56] duration metric: took 15.113032ms WaitForService to wait for kubelet
	I0722 11:56:23.547983   59477 kubeadm.go:582] duration metric: took 4.174196727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:56:23.548007   59477 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:56:23.730474   59477 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:56:23.730495   59477 node_conditions.go:123] node cpu capacity is 2
	I0722 11:56:23.730505   59477 node_conditions.go:105] duration metric: took 182.492899ms to run NodePressure ...
	I0722 11:56:23.730516   59477 start.go:241] waiting for startup goroutines ...
	I0722 11:56:23.730522   59477 start.go:246] waiting for cluster config update ...
	I0722 11:56:23.730532   59477 start.go:255] writing updated cluster config ...
	I0722 11:56:23.730772   59477 ssh_runner.go:195] Run: rm -f paused
	I0722 11:56:23.780571   59477 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 11:56:23.782536   59477 out.go:177] * Done! kubectl is now configured to use "embed-certs-802149" cluster and "default" namespace by default
	I0722 11:56:22.178416   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:24.676529   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:26.677122   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:29.177390   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:31.677291   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:33.677523   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:35.170828   58921 pod_ready.go:81] duration metric: took 4m0.000275806s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" ...
	E0722 11:56:35.170855   58921 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:56:35.170871   58921 pod_ready.go:38] duration metric: took 4m13.545311637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:35.170901   58921 kubeadm.go:597] duration metric: took 4m20.764141089s to restartPrimaryControlPlane
	W0722 11:56:35.170949   58921 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:56:35.170973   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:56:36.176806   60225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.325500952s)
	I0722 11:56:36.176871   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:56:36.193398   60225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:56:36.203561   60225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:56:36.213561   60225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:56:36.213584   60225 kubeadm.go:157] found existing configuration files:
	
	I0722 11:56:36.213654   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 11:56:36.223204   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:56:36.223265   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:56:36.232550   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 11:56:36.241899   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:56:36.241961   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:56:36.252184   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 11:56:36.262462   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:56:36.262518   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:56:36.272942   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 11:56:36.282776   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:56:36.282831   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:56:36.292375   60225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:56:36.490647   60225 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:56:44.713923   60225 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 11:56:44.713975   60225 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:56:44.714046   60225 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:56:44.714145   60225 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:56:44.714255   60225 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:56:44.714330   60225 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:56:44.715906   60225 out.go:204]   - Generating certificates and keys ...
	I0722 11:56:44.716026   60225 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:56:44.716122   60225 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:56:44.716247   60225 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:56:44.716346   60225 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:56:44.716449   60225 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:56:44.716530   60225 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:56:44.716617   60225 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:56:44.716704   60225 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:56:44.716820   60225 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:56:44.716939   60225 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:56:44.717000   60225 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:56:44.717078   60225 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:56:44.717159   60225 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:56:44.717238   60225 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:56:44.717312   60225 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:56:44.717397   60225 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:56:44.717471   60225 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:56:44.717594   60225 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:56:44.717684   60225 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:56:44.719097   60225 out.go:204]   - Booting up control plane ...
	I0722 11:56:44.719201   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:56:44.719288   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:56:44.719387   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:56:44.719548   60225 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:56:44.719662   60225 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:56:44.719698   60225 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:56:44.719819   60225 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:56:44.719909   60225 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:56:44.719969   60225 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001605769s
	I0722 11:56:44.720047   60225 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:56:44.720114   60225 kubeadm.go:310] [api-check] The API server is healthy after 4.501377908s
	I0722 11:56:44.720253   60225 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:56:44.720428   60225 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:56:44.720522   60225 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:56:44.720781   60225 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-605740 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:56:44.720842   60225 kubeadm.go:310] [bootstrap-token] Using token: 51n0hg.x5nghdd43rf7nm3m
	I0722 11:56:44.722095   60225 out.go:204]   - Configuring RBAC rules ...
	I0722 11:56:44.722193   60225 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:56:44.722266   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:56:44.722405   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:56:44.722575   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:56:44.722695   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:56:44.722769   60225 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:56:44.722875   60225 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:56:44.722916   60225 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:56:44.722957   60225 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:56:44.722966   60225 kubeadm.go:310] 
	I0722 11:56:44.723046   60225 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:56:44.723055   60225 kubeadm.go:310] 
	I0722 11:56:44.723117   60225 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:56:44.723123   60225 kubeadm.go:310] 
	I0722 11:56:44.723147   60225 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:56:44.723201   60225 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:56:44.723244   60225 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:56:44.723250   60225 kubeadm.go:310] 
	I0722 11:56:44.723313   60225 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:56:44.723324   60225 kubeadm.go:310] 
	I0722 11:56:44.723374   60225 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:56:44.723387   60225 kubeadm.go:310] 
	I0722 11:56:44.723462   60225 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:56:44.723568   60225 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:56:44.723624   60225 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:56:44.723630   60225 kubeadm.go:310] 
	I0722 11:56:44.723703   60225 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:56:44.723762   60225 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:56:44.723768   60225 kubeadm.go:310] 
	I0722 11:56:44.723832   60225 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 51n0hg.x5nghdd43rf7nm3m \
	I0722 11:56:44.723935   60225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:56:44.723960   60225 kubeadm.go:310] 	--control-plane 
	I0722 11:56:44.723966   60225 kubeadm.go:310] 
	I0722 11:56:44.724035   60225 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:56:44.724041   60225 kubeadm.go:310] 
	I0722 11:56:44.724109   60225 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 51n0hg.x5nghdd43rf7nm3m \
	I0722 11:56:44.724210   60225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:56:44.724222   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:56:44.724231   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:56:44.725651   60225 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:56:44.726843   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:56:44.737856   60225 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:56:44.756687   60225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:56:44.756772   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:44.756790   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-605740 minikube.k8s.io/updated_at=2024_07_22T11_56_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=default-k8s-diff-port-605740 minikube.k8s.io/primary=true
	I0722 11:56:44.782416   60225 ops.go:34] apiserver oom_adj: -16
	I0722 11:56:44.957801   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:45.458616   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:45.958542   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:46.458436   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:46.957908   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:47.458058   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:47.958520   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:48.457970   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:48.958357   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:49.457964   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:49.958236   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:50.458547   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:50.958594   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:51.457865   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:51.958297   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:52.458486   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:52.957877   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:53.458199   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:53.958331   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:54.458178   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:54.958725   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:55.458619   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:55.958861   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:56.458294   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:56.958145   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:57.458414   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:57.566568   60225 kubeadm.go:1113] duration metric: took 12.809852518s to wait for elevateKubeSystemPrivileges
	I0722 11:56:57.566604   60225 kubeadm.go:394] duration metric: took 5m14.748062926s to StartCluster
	I0722 11:56:57.566626   60225 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:57.566709   60225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:56:57.568307   60225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:57.568592   60225 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:56:57.568648   60225 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:56:57.568731   60225 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568765   60225 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.568778   60225 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:56:57.568777   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:56:57.568765   60225 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568775   60225 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568811   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.568813   60225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-605740"
	I0722 11:56:57.568819   60225 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.568828   60225 addons.go:243] addon metrics-server should already be in state true
	I0722 11:56:57.568849   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.569145   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569170   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569187   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.569191   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.569216   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569265   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.570171   60225 out.go:177] * Verifying Kubernetes components...
	I0722 11:56:57.571536   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:56:57.585174   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44223
	I0722 11:56:57.585655   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.586149   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.586174   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.586532   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.587082   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.587135   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.588871   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43863
	I0722 11:56:57.588968   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0722 11:56:57.589289   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.589398   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.589785   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.589809   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.589875   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.589898   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.590183   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.590223   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.590393   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.590860   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.590906   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.594024   60225 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.594046   60225 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:56:57.594074   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.594755   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.594794   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.604913   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0722 11:56:57.605449   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.606000   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.606017   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.606459   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37393
	I0722 11:56:57.606768   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.606871   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.607129   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.607259   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.607273   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.607591   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.607779   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.609472   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.609513   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46833
	I0722 11:56:57.609611   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.609857   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.610299   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.610314   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.610552   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.611030   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.611066   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.611075   60225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:56:57.611086   60225 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:56:57.612333   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:56:57.612352   60225 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:56:57.612373   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.612449   60225 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:57.612463   60225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:56:57.612480   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.615359   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.615950   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.615979   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.616137   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.616288   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.616341   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.616503   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.616636   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.616806   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.616830   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.617016   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.617204   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.617433   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.617587   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.627323   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0722 11:56:57.627674   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.628110   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.628129   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.628426   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.628581   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.630063   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.630250   60225 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:57.630264   60225 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:56:57.630276   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.633223   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.633589   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.633652   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.633864   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.634041   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.634208   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.634349   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.800318   60225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:56:57.838800   60225 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-605740" to be "Ready" ...
	I0722 11:56:57.858375   60225 node_ready.go:49] node "default-k8s-diff-port-605740" has status "Ready":"True"
	I0722 11:56:57.858401   60225 node_ready.go:38] duration metric: took 19.564389ms for node "default-k8s-diff-port-605740" to be "Ready" ...
	I0722 11:56:57.858412   60225 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:57.864271   60225 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.891296   60225 pod_ready.go:92] pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.891327   60225 pod_ready.go:81] duration metric: took 27.02499ms for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.891341   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.904548   60225 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.904572   60225 pod_ready.go:81] duration metric: took 13.223985ms for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.904582   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.922071   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:56:57.922090   60225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:56:57.936115   60225 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.936135   60225 pod_ready.go:81] duration metric: took 31.547556ms for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.936144   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-58qcp" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.956826   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:57.959831   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:57.970183   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:56:57.970209   60225 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:56:58.023756   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:58.023783   60225 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:56:58.132167   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:58.836074   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836101   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836129   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836151   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836444   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.836480   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836489   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.836496   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836507   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836603   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.836635   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836645   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.836653   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836660   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836797   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836809   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.838425   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.838441   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.838457   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.855236   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.855255   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.855533   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.855551   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.855558   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:59.133028   60225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.000816157s)
	I0722 11:56:59.133092   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:59.133108   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:59.133395   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:59.133412   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:59.133420   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:59.133428   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:59.133715   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:59.133744   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:59.133766   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:59.133788   60225 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-605740"
	I0722 11:56:59.135326   60225 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:56:59.136408   60225 addons.go:510] duration metric: took 1.567760763s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 11:56:59.942782   60225 pod_ready.go:102] pod "kube-proxy-58qcp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:00.442434   60225 pod_ready.go:92] pod "kube-proxy-58qcp" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:00.442455   60225 pod_ready.go:81] duration metric: took 2.50630376s for pod "kube-proxy-58qcp" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.442463   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.446225   60225 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:00.446246   60225 pod_ready.go:81] duration metric: took 3.778284ms for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.446254   60225 pod_ready.go:38] duration metric: took 2.58782997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:57:00.446267   60225 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:57:00.446310   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:57:00.461412   60225 api_server.go:72] duration metric: took 2.892790415s to wait for apiserver process to appear ...
	I0722 11:57:00.461431   60225 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:57:00.461448   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:57:00.465904   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 200:
	ok
	I0722 11:57:00.466558   60225 api_server.go:141] control plane version: v1.30.3
	I0722 11:57:00.466577   60225 api_server.go:131] duration metric: took 5.13931ms to wait for apiserver health ...
	I0722 11:57:00.466585   60225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:57:00.471230   60225 system_pods.go:59] 9 kube-system pods found
	I0722 11:57:00.471254   60225 system_pods.go:61] "coredns-7db6d8ff4d-nlfgl" [c02dda63-e71b-429f-b9d5-0b2ca40e8dcc] Running
	I0722 11:57:00.471260   60225 system_pods.go:61] "coredns-7db6d8ff4d-tnnxf" [337c6df7-035c-488d-a123-a410d76d836b] Running
	I0722 11:57:00.471265   60225 system_pods.go:61] "etcd-default-k8s-diff-port-605740" [d1cda641-22de-4bda-ac2b-ed0a92bbde9f] Running
	I0722 11:57:00.471270   60225 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-605740" [b3c4fc30-392e-40db-8be3-fea337a40ca5] Running
	I0722 11:57:00.471274   60225 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-605740" [7cddd0d3-487d-47f5-9c6d-873c5731170e] Running
	I0722 11:57:00.471279   60225 system_pods.go:61] "kube-proxy-58qcp" [25c02c70-a840-410c-9d48-3d15a3927a77] Running
	I0722 11:57:00.471283   60225 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-605740" [b7678aec-927b-40c2-a91e-4c0444c59a90] Running
	I0722 11:57:00.471293   60225 system_pods.go:61] "metrics-server-569cc877fc-2xv7x" [7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:00.471299   60225 system_pods.go:61] "storage-provisioner" [c4ff4a3e-008c-4c4e-9eb3-281c46b10279] Running
	I0722 11:57:00.471309   60225 system_pods.go:74] duration metric: took 4.717009ms to wait for pod list to return data ...
	I0722 11:57:00.471320   60225 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:57:00.642325   60225 default_sa.go:45] found service account: "default"
	I0722 11:57:00.642356   60225 default_sa.go:55] duration metric: took 171.03007ms for default service account to be created ...
	I0722 11:57:00.642365   60225 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:57:00.846043   60225 system_pods.go:86] 9 kube-system pods found
	I0722 11:57:00.846071   60225 system_pods.go:89] "coredns-7db6d8ff4d-nlfgl" [c02dda63-e71b-429f-b9d5-0b2ca40e8dcc] Running
	I0722 11:57:00.846079   60225 system_pods.go:89] "coredns-7db6d8ff4d-tnnxf" [337c6df7-035c-488d-a123-a410d76d836b] Running
	I0722 11:57:00.846083   60225 system_pods.go:89] "etcd-default-k8s-diff-port-605740" [d1cda641-22de-4bda-ac2b-ed0a92bbde9f] Running
	I0722 11:57:00.846087   60225 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-605740" [b3c4fc30-392e-40db-8be3-fea337a40ca5] Running
	I0722 11:57:00.846092   60225 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-605740" [7cddd0d3-487d-47f5-9c6d-873c5731170e] Running
	I0722 11:57:00.846096   60225 system_pods.go:89] "kube-proxy-58qcp" [25c02c70-a840-410c-9d48-3d15a3927a77] Running
	I0722 11:57:00.846100   60225 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-605740" [b7678aec-927b-40c2-a91e-4c0444c59a90] Running
	I0722 11:57:00.846106   60225 system_pods.go:89] "metrics-server-569cc877fc-2xv7x" [7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:00.846110   60225 system_pods.go:89] "storage-provisioner" [c4ff4a3e-008c-4c4e-9eb3-281c46b10279] Running
	I0722 11:57:00.846118   60225 system_pods.go:126] duration metric: took 203.748606ms to wait for k8s-apps to be running ...
	I0722 11:57:00.846124   60225 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:57:00.846168   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:00.867261   60225 system_svc.go:56] duration metric: took 21.130025ms WaitForService to wait for kubelet
	I0722 11:57:00.867290   60225 kubeadm.go:582] duration metric: took 3.298668854s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:57:00.867314   60225 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:57:01.042201   60225 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:57:01.042226   60225 node_conditions.go:123] node cpu capacity is 2
	I0722 11:57:01.042237   60225 node_conditions.go:105] duration metric: took 174.91764ms to run NodePressure ...
	I0722 11:57:01.042249   60225 start.go:241] waiting for startup goroutines ...
	I0722 11:57:01.042256   60225 start.go:246] waiting for cluster config update ...
	I0722 11:57:01.042268   60225 start.go:255] writing updated cluster config ...
	I0722 11:57:01.042526   60225 ssh_runner.go:195] Run: rm -f paused
	I0722 11:57:01.090643   60225 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 11:57:01.092526   60225 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-605740" cluster and "default" namespace by default
	I0722 11:57:01.339755   58921 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.168752701s)
	I0722 11:57:01.339827   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:01.368833   58921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:57:01.392011   58921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:57:01.403725   58921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:57:01.403746   58921 kubeadm.go:157] found existing configuration files:
	
	I0722 11:57:01.403795   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:57:01.421922   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:57:01.422011   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:57:01.434303   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:57:01.445095   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:57:01.445154   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:57:01.464906   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:57:01.475002   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:57:01.475074   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:57:01.484493   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:57:01.493467   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:57:01.493523   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
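The config check above greps each kubeconfig under /etc/kubernetes for the control-plane endpoint and removes any file that does not reference it (here the files simply do not exist yet). A minimal Go sketch of that cleanup pattern, with the endpoint and file list taken from the log lines above; this is an illustration of the pattern, not minikube's exact implementation:

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		// a missing file, or one that does not mention the endpoint, is removed (rm -f semantics)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
    				log.Printf("remove %s: %v", f, rmErr)
    			}
    		}
    	}
    }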
	I0722 11:57:01.502496   58921 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:57:01.550079   58921 kubeadm.go:310] W0722 11:57:01.524041    2933 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 11:57:01.551819   58921 kubeadm.go:310] W0722 11:57:01.525728    2933 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 11:57:01.670102   58921 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:57:10.497048   58921 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0722 11:57:10.497168   58921 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:57:10.497273   58921 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:57:10.497381   58921 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:57:10.497498   58921 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 11:57:10.497555   58921 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:57:10.498805   58921 out.go:204]   - Generating certificates and keys ...
	I0722 11:57:10.498905   58921 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:57:10.498982   58921 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:57:10.499087   58921 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:57:10.499182   58921 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:57:10.499265   58921 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:57:10.499326   58921 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:57:10.499385   58921 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:57:10.499500   58921 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:57:10.499633   58921 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:57:10.499724   58921 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:57:10.499784   58921 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:57:10.499840   58921 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:57:10.499892   58921 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:57:10.499982   58921 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:57:10.500064   58921 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:57:10.500155   58921 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:57:10.500241   58921 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:57:10.500343   58921 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:57:10.500442   58921 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:57:10.501847   58921 out.go:204]   - Booting up control plane ...
	I0722 11:57:10.501931   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:57:10.501995   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:57:10.502068   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:57:10.502203   58921 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:57:10.502318   58921 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:57:10.502367   58921 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:57:10.502477   58921 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:57:10.502541   58921 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:57:10.502599   58921 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501448538s
	I0722 11:57:10.502660   58921 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:57:10.502712   58921 kubeadm.go:310] [api-check] The API server is healthy after 5.001578291s
	I0722 11:57:10.502801   58921 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:57:10.502914   58921 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:57:10.502962   58921 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:57:10.503159   58921 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-339929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:57:10.503211   58921 kubeadm.go:310] [bootstrap-token] Using token: ivof4z.0tnj9kdw05524oxn
	I0722 11:57:10.504409   58921 out.go:204]   - Configuring RBAC rules ...
	I0722 11:57:10.504501   58921 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:57:10.504616   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:57:10.504780   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:57:10.504970   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:57:10.505144   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:57:10.505257   58921 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:57:10.505410   58921 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:57:10.505471   58921 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:57:10.505538   58921 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:57:10.505546   58921 kubeadm.go:310] 
	I0722 11:57:10.505631   58921 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:57:10.505649   58921 kubeadm.go:310] 
	I0722 11:57:10.505755   58921 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:57:10.505764   58921 kubeadm.go:310] 
	I0722 11:57:10.505804   58921 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:57:10.505897   58921 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:57:10.505972   58921 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:57:10.505982   58921 kubeadm.go:310] 
	I0722 11:57:10.506059   58921 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:57:10.506067   58921 kubeadm.go:310] 
	I0722 11:57:10.506128   58921 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:57:10.506136   58921 kubeadm.go:310] 
	I0722 11:57:10.506205   58921 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:57:10.506306   58921 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:57:10.506414   58921 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:57:10.506423   58921 kubeadm.go:310] 
	I0722 11:57:10.506520   58921 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:57:10.506617   58921 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:57:10.506626   58921 kubeadm.go:310] 
	I0722 11:57:10.506742   58921 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ivof4z.0tnj9kdw05524oxn \
	I0722 11:57:10.506885   58921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:57:10.506922   58921 kubeadm.go:310] 	--control-plane 
	I0722 11:57:10.506931   58921 kubeadm.go:310] 
	I0722 11:57:10.507044   58921 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:57:10.507057   58921 kubeadm.go:310] 
	I0722 11:57:10.507156   58921 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ivof4z.0tnj9kdw05524oxn \
	I0722 11:57:10.507309   58921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:57:10.507321   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:57:10.507330   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:57:10.508685   58921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:57:10.509747   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:57:10.520250   58921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
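The bridge CNI step above writes a small JSON conflist to /etc/cni/net.d/1-k8s.conflist. The sketch below is illustrative only: the field values are assumptions, not the exact 496-byte file from this run; it simply shows the shape of a bridge-plugin conflist being written from Go:

    package main

    import (
    	"log"
    	"os"
    )

    // illustrative bridge CNI conflist; values are assumed, not minikube's exact file
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	// path taken from the log line above; 0644 is a typical conflist mode
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }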
	I0722 11:57:10.540094   58921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:57:10.540196   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:10.540212   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-339929 minikube.k8s.io/updated_at=2024_07_22T11_57_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=no-preload-339929 minikube.k8s.io/primary=true
	I0722 11:57:10.763453   58921 ops.go:34] apiserver oom_adj: -16
	I0722 11:57:10.763505   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:11.264268   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:11.764311   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:12.264344   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:12.764563   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:13.264149   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:13.764260   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:14.263595   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:14.763794   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:15.263787   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:15.343777   58921 kubeadm.go:1113] duration metric: took 4.803631766s to wait for elevateKubeSystemPrivileges
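The repeated "kubectl get sa default" calls above are minikube polling until the default service account exists before it finishes elevating kube-system privileges. A minimal sketch of that retry loop, assuming kubectl is on PATH (the real run invokes the versioned binary over SSH) and using the kubeconfig path shown in the log; the timeout is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubeconfig := "/var/lib/minikube/kubeconfig" // path taken from the log above
    	deadline := time.Now().Add(2 * time.Minute)  // timeout chosen for the sketch

    	for time.Now().Before(deadline) {
    		// succeeds only once the "default" service account has been created
    		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }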
	I0722 11:57:15.343817   58921 kubeadm.go:394] duration metric: took 5m0.988139889s to StartCluster
	I0722 11:57:15.343840   58921 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:57:15.343940   58921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:57:15.345913   58921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:57:15.346216   58921 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:57:15.346387   58921 config.go:182] Loaded profile config "no-preload-339929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 11:57:15.346343   58921 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:57:15.346441   58921 addons.go:69] Setting storage-provisioner=true in profile "no-preload-339929"
	I0722 11:57:15.346454   58921 addons.go:69] Setting metrics-server=true in profile "no-preload-339929"
	I0722 11:57:15.346483   58921 addons.go:234] Setting addon metrics-server=true in "no-preload-339929"
	W0722 11:57:15.346491   58921 addons.go:243] addon metrics-server should already be in state true
	I0722 11:57:15.346485   58921 addons.go:234] Setting addon storage-provisioner=true in "no-preload-339929"
	W0722 11:57:15.346502   58921 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:57:15.346515   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.346529   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.346445   58921 addons.go:69] Setting default-storageclass=true in profile "no-preload-339929"
	I0722 11:57:15.346600   58921 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-339929"
	I0722 11:57:15.346890   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.346920   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.346890   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.346994   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.347007   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.347025   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.347928   58921 out.go:177] * Verifying Kubernetes components...
	I0722 11:57:15.352932   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:57:15.362633   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0722 11:57:15.362665   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I0722 11:57:15.362630   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34691
	I0722 11:57:15.363041   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363053   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363133   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363521   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363537   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363544   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363558   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363568   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363587   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363905   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.363945   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.364078   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.364104   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.364485   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.364517   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.364602   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.364629   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.367146   58921 addons.go:234] Setting addon default-storageclass=true in "no-preload-339929"
	W0722 11:57:15.367170   58921 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:57:15.367197   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.367419   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.367436   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.380125   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46561
	I0722 11:57:15.380393   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
	I0722 11:57:15.380557   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.380972   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.381545   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.381546   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.381570   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.381585   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.381956   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.381987   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.382133   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.382152   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.383766   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.383925   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.384000   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0722 11:57:15.384347   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.384833   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.384856   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.385195   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.385635   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.385664   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.386055   58921 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:57:15.386060   58921 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:57:15.387105   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:57:15.387119   58921 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:57:15.387138   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.387186   58921 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:57:15.387197   58921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:57:15.387215   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.390591   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.390928   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.390975   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.390996   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.391233   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.391366   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.391387   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.391423   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.391599   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.391632   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.391802   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.391841   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.391986   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.392111   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.401709   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34965
	I0722 11:57:15.402082   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.402543   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.402563   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.402854   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.403074   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.404406   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.404603   58921 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:57:15.404617   58921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:57:15.404633   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.407332   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.407829   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.407853   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.408041   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.408218   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.408356   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.408491   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.550538   58921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:57:15.568066   58921 node_ready.go:35] waiting up to 6m0s for node "no-preload-339929" to be "Ready" ...
	I0722 11:57:15.577034   58921 node_ready.go:49] node "no-preload-339929" has status "Ready":"True"
	I0722 11:57:15.577054   58921 node_ready.go:38] duration metric: took 8.96328ms for node "no-preload-339929" to be "Ready" ...
	I0722 11:57:15.577062   58921 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:57:15.587213   58921 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace to be "Ready" ...
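The node_ready/pod_ready checks above poll the API server for the node's Ready condition and for the system-critical pods. A minimal client-go sketch of the node half of that check, assuming client-go is available and using an illustrative kubeconfig path; this is a sketch of the pattern, not minikube's own wait code:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// kubeconfig path is illustrative
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	node, err := client.CoreV1().Nodes().Get(context.Background(), "no-preload-339929", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// a node counts as Ready when its NodeReady condition is True
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			fmt.Printf("node Ready condition: %s\n", c.Status)
    		}
    	}
    }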
	I0722 11:57:15.629092   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:57:15.714856   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:57:15.714885   58921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:57:15.746923   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:57:15.781300   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:57:15.781327   58921 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:57:15.842787   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:57:15.842816   58921 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:57:15.884901   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:57:16.165926   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.165955   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166184   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166200   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166255   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166296   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166315   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166329   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166340   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166454   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166497   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166520   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166542   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166581   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166595   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166551   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166519   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166954   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166969   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.199171   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.199196   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.199533   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.199558   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.199573   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.678992   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.679015   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.679366   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.679389   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.679400   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.679400   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.679408   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.679658   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.679699   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.679708   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.679719   58921 addons.go:475] Verifying addon metrics-server=true in "no-preload-339929"
	I0722 11:57:16.681483   58921 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:57:16.682888   58921 addons.go:510] duration metric: took 1.336544744s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 11:57:17.596659   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:20.093596   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:24.750495   59674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:57:24.750641   59674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 11:57:24.752309   59674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:57:24.752368   59674 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:57:24.752499   59674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:57:24.752662   59674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:57:24.752788   59674 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 11:57:24.752851   59674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:57:24.754464   59674 out.go:204]   - Generating certificates and keys ...
	I0722 11:57:24.754528   59674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:57:24.754595   59674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:57:24.754712   59674 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:57:24.754926   59674 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:57:24.755033   59674 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:57:24.755114   59674 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:57:24.755188   59674 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:57:24.755276   59674 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:57:24.755374   59674 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:57:24.755472   59674 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:57:24.755513   59674 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:57:24.755561   59674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:57:24.755606   59674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:57:24.755647   59674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:57:24.755700   59674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:57:24.755742   59674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:57:24.755836   59674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:57:24.755950   59674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:57:24.755986   59674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:57:24.756089   59674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:57:24.757395   59674 out.go:204]   - Booting up control plane ...
	I0722 11:57:24.757482   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:57:24.757566   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:57:24.757657   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:57:24.757905   59674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:57:24.758131   59674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:57:24.758205   59674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:57:24.758311   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.758565   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.758650   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.758852   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.758957   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759153   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759217   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759412   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759495   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759688   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759696   59674 kubeadm.go:310] 
	I0722 11:57:24.759729   59674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:57:24.759791   59674 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:57:24.759812   59674 kubeadm.go:310] 
	I0722 11:57:24.759868   59674 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:57:24.759903   59674 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:57:24.760077   59674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:57:24.760094   59674 kubeadm.go:310] 
	I0722 11:57:24.760245   59674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:57:24.760300   59674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:57:24.760350   59674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:57:24.760363   59674 kubeadm.go:310] 
	I0722 11:57:24.760534   59674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:57:24.760640   59674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0722 11:57:24.760654   59674 kubeadm.go:310] 
	I0722 11:57:24.760819   59674 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:57:24.760902   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:57:24.761013   59674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:57:24.761124   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:57:24.761213   59674 kubeadm.go:310] 
	W0722 11:57:24.761263   59674 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0722 11:57:24.761321   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:57:25.222130   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:25.236593   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:57:25.247009   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:57:25.247026   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:57:25.247078   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:57:25.256617   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:57:25.256674   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:57:25.265950   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:57:25.275080   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:57:25.275133   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:57:25.285058   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:57:25.294015   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:57:25.294070   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:57:25.304009   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:57:25.313492   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:57:25.313565   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:57:25.322903   59674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:57:22.593478   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.593498   58921 pod_ready.go:81] duration metric: took 7.006267885s for pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.593505   58921 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.598122   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.598149   58921 pod_ready.go:81] duration metric: took 4.631196ms for pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.598159   58921 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.602448   58921 pod_ready.go:92] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.602466   58921 pod_ready.go:81] duration metric: took 4.300795ms for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.602474   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.607921   58921 pod_ready.go:92] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.607940   58921 pod_ready.go:81] duration metric: took 5.46066ms for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.607951   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.114900   58921 pod_ready.go:92] pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.114929   58921 pod_ready.go:81] duration metric: took 1.506968399s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.114942   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b5xwg" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.190875   58921 pod_ready.go:92] pod "kube-proxy-b5xwg" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.190895   58921 pod_ready.go:81] duration metric: took 75.947595ms for pod "kube-proxy-b5xwg" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.190905   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.590994   58921 pod_ready.go:92] pod "kube-scheduler-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.591020   58921 pod_ready.go:81] duration metric: took 400.108088ms for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.591029   58921 pod_ready.go:38] duration metric: took 9.013958119s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:57:24.591051   58921 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:57:24.591110   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:57:24.609675   58921 api_server.go:72] duration metric: took 9.263421304s to wait for apiserver process to appear ...
	I0722 11:57:24.609701   58921 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:57:24.609719   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:57:24.613446   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 200:
	ok
	I0722 11:57:24.614282   58921 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 11:57:24.614301   58921 api_server.go:131] duration metric: took 4.591983ms to wait for apiserver health ...
	I0722 11:57:24.614310   58921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:57:24.796872   58921 system_pods.go:59] 9 kube-system pods found
	I0722 11:57:24.796910   58921 system_pods.go:61] "coredns-5cfdc65f69-vg4wp" [3556f321-9c0a-437f-a06e-4eca4b07781d] Running
	I0722 11:57:24.796917   58921 system_pods.go:61] "coredns-5cfdc65f69-xxf6t" [6e933cad-a95a-47c4-b8b9-89205619fb70] Running
	I0722 11:57:24.796922   58921 system_pods.go:61] "etcd-no-preload-339929" [6cd32101-c444-44a3-b024-708228c1b2de] Running
	I0722 11:57:24.796927   58921 system_pods.go:61] "kube-apiserver-no-preload-339929" [cf127459-d105-4ed0-9b65-653e9353e123] Running
	I0722 11:57:24.796933   58921 system_pods.go:61] "kube-controller-manager-no-preload-339929" [11b4341e-5d2d-41c2-a078-6345546aa418] Running
	I0722 11:57:24.796940   58921 system_pods.go:61] "kube-proxy-b5xwg" [6ec19ad2-170e-4402-bcb7-ebf14a2537ce] Running
	I0722 11:57:24.796944   58921 system_pods.go:61] "kube-scheduler-no-preload-339929" [bb0b4b2e-ba9c-4521-bcf9-67230983dc8e] Running
	I0722 11:57:24.796953   58921 system_pods.go:61] "metrics-server-78fcd8795b-9vzx2" [bb2ae44c-3190-4025-8f2e-e236c52da27e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:24.796960   58921 system_pods.go:61] "storage-provisioner" [f56d91d7-a252-485d-936d-3f44804d26ec] Running
	I0722 11:57:24.796973   58921 system_pods.go:74] duration metric: took 182.655813ms to wait for pod list to return data ...
	I0722 11:57:24.796985   58921 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:57:24.992009   58921 default_sa.go:45] found service account: "default"
	I0722 11:57:24.992032   58921 default_sa.go:55] duration metric: took 195.040103ms for default service account to be created ...
	I0722 11:57:24.992040   58921 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:57:25.196738   58921 system_pods.go:86] 9 kube-system pods found
	I0722 11:57:25.196763   58921 system_pods.go:89] "coredns-5cfdc65f69-vg4wp" [3556f321-9c0a-437f-a06e-4eca4b07781d] Running
	I0722 11:57:25.196768   58921 system_pods.go:89] "coredns-5cfdc65f69-xxf6t" [6e933cad-a95a-47c4-b8b9-89205619fb70] Running
	I0722 11:57:25.196772   58921 system_pods.go:89] "etcd-no-preload-339929" [6cd32101-c444-44a3-b024-708228c1b2de] Running
	I0722 11:57:25.196777   58921 system_pods.go:89] "kube-apiserver-no-preload-339929" [cf127459-d105-4ed0-9b65-653e9353e123] Running
	I0722 11:57:25.196781   58921 system_pods.go:89] "kube-controller-manager-no-preload-339929" [11b4341e-5d2d-41c2-a078-6345546aa418] Running
	I0722 11:57:25.196785   58921 system_pods.go:89] "kube-proxy-b5xwg" [6ec19ad2-170e-4402-bcb7-ebf14a2537ce] Running
	I0722 11:57:25.196789   58921 system_pods.go:89] "kube-scheduler-no-preload-339929" [bb0b4b2e-ba9c-4521-bcf9-67230983dc8e] Running
	I0722 11:57:25.196795   58921 system_pods.go:89] "metrics-server-78fcd8795b-9vzx2" [bb2ae44c-3190-4025-8f2e-e236c52da27e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:25.196799   58921 system_pods.go:89] "storage-provisioner" [f56d91d7-a252-485d-936d-3f44804d26ec] Running
	I0722 11:57:25.196806   58921 system_pods.go:126] duration metric: took 204.761601ms to wait for k8s-apps to be running ...
	I0722 11:57:25.196813   58921 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:57:25.196855   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:25.217589   58921 system_svc.go:56] duration metric: took 20.766557ms WaitForService to wait for kubelet
	I0722 11:57:25.217619   58921 kubeadm.go:582] duration metric: took 9.871369454s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:57:25.217641   58921 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:57:25.395091   58921 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:57:25.395116   58921 node_conditions.go:123] node cpu capacity is 2
	I0722 11:57:25.395128   58921 node_conditions.go:105] duration metric: took 177.480389ms to run NodePressure ...
	I0722 11:57:25.395143   58921 start.go:241] waiting for startup goroutines ...
	I0722 11:57:25.395159   58921 start.go:246] waiting for cluster config update ...
	I0722 11:57:25.395173   58921 start.go:255] writing updated cluster config ...
	I0722 11:57:25.395623   58921 ssh_runner.go:195] Run: rm -f paused
	I0722 11:57:25.449438   58921 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0722 11:57:25.450840   58921 out.go:177] * Done! kubectl is now configured to use "no-preload-339929" cluster and "default" namespace by default
	I0722 11:57:25.545662   59674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:59:21.714624   59674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:59:21.714729   59674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 11:59:21.716617   59674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:59:21.716683   59674 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:59:21.716771   59674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:59:21.716939   59674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:59:21.717077   59674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:59:21.717136   59674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:59:21.718742   59674 out.go:204]   - Generating certificates and keys ...
	I0722 11:59:21.718837   59674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:59:21.718927   59674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:59:21.718995   59674 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:59:21.719065   59674 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:59:21.719140   59674 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:59:21.719187   59674 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:59:21.719251   59674 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:59:21.719329   59674 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:59:21.719408   59674 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:59:21.719497   59674 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:59:21.719538   59674 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:59:21.719592   59674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:59:21.719635   59674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:59:21.719680   59674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:59:21.719745   59674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:59:21.719823   59674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:59:21.719970   59674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:59:21.720056   59674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:59:21.720090   59674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:59:21.720147   59674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:59:21.721505   59674 out.go:204]   - Booting up control plane ...
	I0722 11:59:21.721586   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:59:21.721656   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:59:21.721712   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:59:21.721778   59674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:59:21.721923   59674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:59:21.721988   59674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:59:21.722045   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722201   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722272   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722431   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722488   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722658   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722730   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722885   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722943   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.723110   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.723118   59674 kubeadm.go:310] 
	I0722 11:59:21.723154   59674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:59:21.723192   59674 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:59:21.723198   59674 kubeadm.go:310] 
	I0722 11:59:21.723226   59674 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:59:21.723255   59674 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:59:21.723339   59674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:59:21.723346   59674 kubeadm.go:310] 
	I0722 11:59:21.723442   59674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:59:21.723495   59674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:59:21.723537   59674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:59:21.723546   59674 kubeadm.go:310] 
	I0722 11:59:21.723709   59674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:59:21.723823   59674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:59:21.723833   59674 kubeadm.go:310] 
	I0722 11:59:21.723941   59674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:59:21.724023   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:59:21.724086   59674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:59:21.724156   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:59:21.724197   59674 kubeadm.go:310] 
	I0722 11:59:21.724212   59674 kubeadm.go:394] duration metric: took 7m57.831193066s to StartCluster
	I0722 11:59:21.724246   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:59:21.724296   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:59:21.771578   59674 cri.go:89] found id: ""
	I0722 11:59:21.771611   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.771622   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:59:21.771631   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:59:21.771694   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:59:21.809027   59674 cri.go:89] found id: ""
	I0722 11:59:21.809055   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.809065   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:59:21.809071   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:59:21.809143   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:59:21.844667   59674 cri.go:89] found id: ""
	I0722 11:59:21.844690   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.844698   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:59:21.844703   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:59:21.844754   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:59:21.888054   59674 cri.go:89] found id: ""
	I0722 11:59:21.888078   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.888086   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:59:21.888091   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:59:21.888150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:59:21.931688   59674 cri.go:89] found id: ""
	I0722 11:59:21.931711   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.931717   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:59:21.931722   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:59:21.931775   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:59:21.974044   59674 cri.go:89] found id: ""
	I0722 11:59:21.974074   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.974095   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:59:21.974102   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:59:21.974170   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:59:22.010302   59674 cri.go:89] found id: ""
	I0722 11:59:22.010326   59674 logs.go:276] 0 containers: []
	W0722 11:59:22.010334   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:59:22.010338   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:59:22.010385   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:59:22.047170   59674 cri.go:89] found id: ""
	I0722 11:59:22.047201   59674 logs.go:276] 0 containers: []
	W0722 11:59:22.047212   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:59:22.047224   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:59:22.047237   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:59:22.086648   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:59:22.086678   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:59:22.141255   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:59:22.141288   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:59:22.157063   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:59:22.157095   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:59:22.244259   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:59:22.244284   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:59:22.244300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0722 11:59:22.357489   59674 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 11:59:22.357536   59674 out.go:239] * 
	W0722 11:59:22.357600   59674 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:59:22.357622   59674 out.go:239] * 
	W0722 11:59:22.358374   59674 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 11:59:22.361655   59674 out.go:177] 
	W0722 11:59:22.362800   59674 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:59:22.362845   59674 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 11:59:22.362860   59674 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 11:59:22.364239   59674 out.go:177] 
	
	
	==> CRI-O <==
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.086147744Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721649564086129843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d3d89b7-4e55-440e-bd11-bdedaad6e640 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.086753430Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65e99dbe-b687-4688-9b42-7e7a96d3a47a name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.086846328Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65e99dbe-b687-4688-9b42-7e7a96d3a47a name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.086899875Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=65e99dbe-b687-4688-9b42-7e7a96d3a47a name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.123014354Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66c9cba7-045a-4848-bc87-af8df8ce3ba0 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.123100843Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66c9cba7-045a-4848-bc87-af8df8ce3ba0 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.124345045Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4265ae45-d77c-4a0d-be21-7f70729e4adb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.124777353Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721649564124753101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4265ae45-d77c-4a0d-be21-7f70729e4adb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.125232254Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60677eca-7b93-40f9-a0b4-1652aade2c3d name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.125304467Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60677eca-7b93-40f9-a0b4-1652aade2c3d name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.125341097Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=60677eca-7b93-40f9-a0b4-1652aade2c3d name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.163764415Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d8f3fa1-bf55-48b8-9e5c-794723441309 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.163880020Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d8f3fa1-bf55-48b8-9e5c-794723441309 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.165984779Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=178157bb-731e-4d4e-a06e-88ca1297fd26 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.166565359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721649564166543692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=178157bb-731e-4d4e-a06e-88ca1297fd26 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.167076273Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71bdf827-ff4d-418b-9a53-8592fea65d81 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.167156259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71bdf827-ff4d-418b-9a53-8592fea65d81 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.167219077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=71bdf827-ff4d-418b-9a53-8592fea65d81 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.198879731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a4a79f3b-3a6c-4758-99e7-14cfb1336bc2 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.198976797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a4a79f3b-3a6c-4758-99e7-14cfb1336bc2 name=/runtime.v1.RuntimeService/Version
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.200331979Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06f41265-5a14-4c62-beb9-43bdcc26edb5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.200742857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721649564200718816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06f41265-5a14-4c62-beb9-43bdcc26edb5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.201464594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3f50a9c-76e8-4a88-b291-2f108d3dd944 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.201530729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3f50a9c-76e8-4a88-b291-2f108d3dd944 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 11:59:24 old-k8s-version-101261 crio[646]: time="2024-07-22 11:59:24.201563932Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f3f50a9c-76e8-4a88-b291-2f108d3dd944 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul22 11:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050630] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040294] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.664885] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.301657] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.606133] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.299545] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.059053] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064893] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.225240] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.133946] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.249574] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +5.972877] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.060881] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.615774] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[ +12.639328] kauditd_printk_skb: 46 callbacks suppressed
	[Jul22 11:55] systemd-fstab-generator[5024]: Ignoring "noauto" option for root device
	[Jul22 11:57] systemd-fstab-generator[5299]: Ignoring "noauto" option for root device
	[  +0.065899] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 11:59:24 up 8 min,  0 users,  load average: 0.15, 0.11, 0.06
	Linux old-k8s-version-101261 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 22 11:59:21 old-k8s-version-101261 kubelet[5479]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jul 22 11:59:21 old-k8s-version-101261 kubelet[5479]: goroutine 148 [runnable]:
	Jul 22 11:59:21 old-k8s-version-101261 kubelet[5479]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0005a3500)
	Jul 22 11:59:21 old-k8s-version-101261 kubelet[5479]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Jul 22 11:59:21 old-k8s-version-101261 kubelet[5479]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 22 11:59:21 old-k8s-version-101261 kubelet[5479]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Jul 22 11:59:21 old-k8s-version-101261 kubelet[5479]: goroutine 149 [select]:
	Jul 22 11:59:21 old-k8s-version-101261 kubelet[5479]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0009b2d70, 0xc000a10d01, 0xc000915780, 0xc000943c80, 0xc000a36200, 0xc000a361c0)
	Jul 22 11:59:21 old-k8s-version-101261 kubelet[5479]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jul 22 11:59:21 old-k8s-version-101261 kubelet[5479]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000a10de0, 0x0, 0x0)
	Jul 22 11:59:21 old-k8s-version-101261 kubelet[5479]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jul 22 11:59:21 old-k8s-version-101261 kubelet[5479]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0005a3500)
	Jul 22 11:59:21 old-k8s-version-101261 kubelet[5479]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jul 22 11:59:21 old-k8s-version-101261 kubelet[5479]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 22 11:59:21 old-k8s-version-101261 kubelet[5479]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jul 22 11:59:21 old-k8s-version-101261 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 22 11:59:21 old-k8s-version-101261 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 22 11:59:22 old-k8s-version-101261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 22 11:59:22 old-k8s-version-101261 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 22 11:59:22 old-k8s-version-101261 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 22 11:59:22 old-k8s-version-101261 kubelet[5546]: I0722 11:59:22.313890    5546 server.go:416] Version: v1.20.0
	Jul 22 11:59:22 old-k8s-version-101261 kubelet[5546]: I0722 11:59:22.314116    5546 server.go:837] Client rotation is on, will bootstrap in background
	Jul 22 11:59:22 old-k8s-version-101261 kubelet[5546]: I0722 11:59:22.315981    5546 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 22 11:59:22 old-k8s-version-101261 kubelet[5546]: I0722 11:59:22.317278    5546 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jul 22 11:59:22 old-k8s-version-101261 kubelet[5546]: W0722 11:59:22.317382    5546 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-101261 -n old-k8s-version-101261
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-101261 -n old-k8s-version-101261: exit status 2 (230.150154ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-101261" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (710.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-605740 -n default-k8s-diff-port-605740
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-605740 -n default-k8s-diff-port-605740: exit status 3 (3.167758095s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:49:06.516736   60109 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	E0722 11:49:06.516759   60109 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-605740 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-605740 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153783819s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-605740 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-605740 -n default-k8s-diff-port-605740
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-605740 -n default-k8s-diff-port-605740: exit status 3 (3.061733327s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 11:49:15.732687   60195 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	E0722 11:49:15.732709   60195 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-605740" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
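The failure above comes down to the addon-enable call running against a profile whose VM is unreachable over SSH (dial tcp 192.168.39.87:22: connect: no route to host). A minimal sketch of the same sequence for manual reproduction, using only commands that already appear in this log (profile name and image override copied from above; the standalone status check is an extra diagnostic, not part of the test itself):

    # Stop the profile, then check host state and retry the dashboard addon enable.
    out/minikube-linux-amd64 stop -p default-k8s-diff-port-605740 --alsologtostderr -v=3
    out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-605740
    out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-605740 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    # The test expects the host status to read "Stopped" and the enable call to exit 0;
    # in this run the status was "Error" and the enable exited 11 (MK_ADDON_ENABLE_PAUSED).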

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0722 11:56:36.611119   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-802149 -n embed-certs-802149
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-22 12:05:24.306294558 +0000 UTC m=+5795.033708900
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
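The wait above targets the dashboard pods in the restarted cluster. A short sketch of how the same condition can be checked by hand (namespace and label selector taken from the wait message; the kubectl context is assumed to match the minikube profile name embed-certs-802149):

    # List the pods the test waited for and inspect why they never became ready.
    kubectl --context embed-certs-802149 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context embed-certs-802149 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard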
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-802149 -n embed-certs-802149
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-802149 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-802149 logs -n 25: (1.93520463s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC | 22 Jul 24 11:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC | 22 Jul 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-467176                              | cert-expiration-467176       | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-339929             | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-339929                                   | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-467176                              | cert-expiration-467176       | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	| start   | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-802149            | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	| delete  | -p                                                     | disable-driver-mounts-737017 | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	|         | disable-driver-mounts-737017                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:46 UTC |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-101261        | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:45 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-339929                  | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-339929 --memory=2200                     | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC | 22 Jul 24 11:57 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-605740  | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC | 22 Jul 24 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC |                     |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-802149                 | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-101261                              | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:47 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-101261             | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:47 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-101261                              | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-605740       | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:49 UTC | 22 Jul 24 11:57 UTC |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 11:49:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 11:49:15.771364   60225 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:49:15.771757   60225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:49:15.771777   60225 out.go:304] Setting ErrFile to fd 2...
	I0722 11:49:15.771784   60225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:49:15.772270   60225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:49:15.773178   60225 out.go:298] Setting JSON to false
	I0722 11:49:15.774093   60225 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5508,"bootTime":1721643448,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 11:49:15.774158   60225 start.go:139] virtualization: kvm guest
	I0722 11:49:15.776078   60225 out.go:177] * [default-k8s-diff-port-605740] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 11:49:15.777632   60225 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 11:49:15.777656   60225 notify.go:220] Checking for updates...
	I0722 11:49:15.780016   60225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 11:49:15.781179   60225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:49:15.782401   60225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:49:15.783538   60225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 11:49:15.784660   60225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 11:49:15.786153   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:49:15.786546   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:49:15.786580   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:49:15.801130   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40897
	I0722 11:49:15.801454   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:49:15.802000   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:49:15.802022   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:49:15.802343   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:49:15.802519   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:49:15.802785   60225 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 11:49:15.803097   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:49:15.803130   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:49:15.817222   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I0722 11:49:15.817616   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:49:15.818025   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:49:15.818050   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:49:15.818316   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:49:15.818457   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:49:15.851885   60225 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 11:49:15.853142   60225 start.go:297] selected driver: kvm2
	I0722 11:49:15.853162   60225 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:49:15.853293   60225 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 11:49:15.854178   60225 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:49:15.854267   60225 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 11:49:15.869086   60225 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 11:49:15.869437   60225 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:49:15.869496   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:49:15.869510   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:49:15.869553   60225 start.go:340] cluster config:
	{Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:49:15.869650   60225 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:49:15.871443   60225 out.go:177] * Starting "default-k8s-diff-port-605740" primary control-plane node in "default-k8s-diff-port-605740" cluster
	I0722 11:49:18.708660   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:15.872666   60225 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:49:15.872712   60225 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 11:49:15.872722   60225 cache.go:56] Caching tarball of preloaded images
	I0722 11:49:15.872822   60225 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 11:49:15.872836   60225 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 11:49:15.872964   60225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/config.json ...
	I0722 11:49:15.873188   60225 start.go:360] acquireMachinesLock for default-k8s-diff-port-605740: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:49:21.780635   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:27.860643   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:30.932670   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:37.012663   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:40.084620   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:46.164558   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:49.236597   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:55.316683   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:58.388708   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:04.468652   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:07.540692   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:13.620745   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:16.692661   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:22.772655   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:25.844570   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:31.924648   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:34.996632   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:38.000554   59477 start.go:364] duration metric: took 3m13.232713685s to acquireMachinesLock for "embed-certs-802149"
	I0722 11:50:38.000603   59477 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:50:38.000609   59477 fix.go:54] fixHost starting: 
	I0722 11:50:38.000916   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:50:38.000945   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:50:38.015673   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0722 11:50:38.016063   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:50:38.016570   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:50:38.016599   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:50:38.016926   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:50:38.017123   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:38.017256   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:50:38.018766   59477 fix.go:112] recreateIfNeeded on embed-certs-802149: state=Stopped err=<nil>
	I0722 11:50:38.018787   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	W0722 11:50:38.018925   59477 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:50:38.020306   59477 out.go:177] * Restarting existing kvm2 VM for "embed-certs-802149" ...
	I0722 11:50:38.021405   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Start
	I0722 11:50:38.021569   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring networks are active...
	I0722 11:50:38.022209   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring network default is active
	I0722 11:50:38.022492   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring network mk-embed-certs-802149 is active
	I0722 11:50:38.022753   59477 main.go:141] libmachine: (embed-certs-802149) Getting domain xml...
	I0722 11:50:38.023364   59477 main.go:141] libmachine: (embed-certs-802149) Creating domain...
	I0722 11:50:39.205696   59477 main.go:141] libmachine: (embed-certs-802149) Waiting to get IP...
	I0722 11:50:39.206555   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.206928   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.207002   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.206893   60553 retry.go:31] will retry after 250.927989ms: waiting for machine to come up
	I0722 11:50:39.459432   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.459909   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.459938   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.459862   60553 retry.go:31] will retry after 277.950273ms: waiting for machine to come up
	I0722 11:50:37.998282   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:50:37.998320   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:50:37.998616   58921 buildroot.go:166] provisioning hostname "no-preload-339929"
	I0722 11:50:37.998638   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:50:37.998852   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:50:38.000410   58921 machine.go:97] duration metric: took 4m37.434000152s to provisionDockerMachine
	I0722 11:50:38.000456   58921 fix.go:56] duration metric: took 4m37.453731858s for fixHost
	I0722 11:50:38.000466   58921 start.go:83] releasing machines lock for "no-preload-339929", held for 4m37.453770575s
	W0722 11:50:38.000487   58921 start.go:714] error starting host: provision: host is not running
	W0722 11:50:38.000589   58921 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0722 11:50:38.000597   58921 start.go:729] Will try again in 5 seconds ...
	I0722 11:50:39.739339   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.739770   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.739799   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.739724   60553 retry.go:31] will retry after 367.4788ms: waiting for machine to come up
	I0722 11:50:40.109153   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:40.109568   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:40.109598   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:40.109518   60553 retry.go:31] will retry after 599.052603ms: waiting for machine to come up
	I0722 11:50:40.709866   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:40.710342   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:40.710375   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:40.710299   60553 retry.go:31] will retry after 469.478286ms: waiting for machine to come up
	I0722 11:50:41.180930   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:41.181348   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:41.181370   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:41.181302   60553 retry.go:31] will retry after 690.713081ms: waiting for machine to come up
	I0722 11:50:41.873801   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:41.874158   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:41.874182   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:41.874106   60553 retry.go:31] will retry after 828.336067ms: waiting for machine to come up
	I0722 11:50:42.703984   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:42.704401   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:42.704422   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:42.704340   60553 retry.go:31] will retry after 1.22368693s: waiting for machine to come up
	I0722 11:50:43.929406   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:43.929866   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:43.929896   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:43.929838   60553 retry.go:31] will retry after 1.809806439s: waiting for machine to come up
	I0722 11:50:43.002990   58921 start.go:360] acquireMachinesLock for no-preload-339929: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:50:45.741657   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:45.742012   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:45.742034   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:45.741979   60553 retry.go:31] will retry after 2.216041266s: waiting for machine to come up
	I0722 11:50:47.959511   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:47.959979   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:47.960003   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:47.959919   60553 retry.go:31] will retry after 2.278973432s: waiting for machine to come up
	I0722 11:50:50.241992   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:50.242399   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:50.242413   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:50.242377   60553 retry.go:31] will retry after 2.533863574s: waiting for machine to come up
	I0722 11:50:52.779222   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:52.779627   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:52.779661   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:52.779579   60553 retry.go:31] will retry after 3.004874532s: waiting for machine to come up
	I0722 11:50:57.057071   59674 start.go:364] duration metric: took 3m21.54200658s to acquireMachinesLock for "old-k8s-version-101261"
	I0722 11:50:57.057128   59674 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:50:57.057138   59674 fix.go:54] fixHost starting: 
	I0722 11:50:57.057543   59674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:50:57.057575   59674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:50:57.073788   59674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36245
	I0722 11:50:57.074103   59674 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:50:57.074561   59674 main.go:141] libmachine: Using API Version  1
	I0722 11:50:57.074582   59674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:50:57.074903   59674 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:50:57.075091   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:50:57.075225   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetState
	I0722 11:50:57.076587   59674 fix.go:112] recreateIfNeeded on old-k8s-version-101261: state=Stopped err=<nil>
	I0722 11:50:57.076607   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	W0722 11:50:57.076745   59674 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:50:57.079659   59674 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-101261" ...
	I0722 11:50:55.787998   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.788533   59477 main.go:141] libmachine: (embed-certs-802149) Found IP for machine: 192.168.72.113
	I0722 11:50:55.788556   59477 main.go:141] libmachine: (embed-certs-802149) Reserving static IP address...
	I0722 11:50:55.788567   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has current primary IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.788933   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "embed-certs-802149", mac: "52:54:00:ce:af:8a", ip: "192.168.72.113"} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.788954   59477 main.go:141] libmachine: (embed-certs-802149) DBG | skip adding static IP to network mk-embed-certs-802149 - found existing host DHCP lease matching {name: "embed-certs-802149", mac: "52:54:00:ce:af:8a", ip: "192.168.72.113"}
	I0722 11:50:55.788965   59477 main.go:141] libmachine: (embed-certs-802149) Reserved static IP address: 192.168.72.113
	I0722 11:50:55.788974   59477 main.go:141] libmachine: (embed-certs-802149) Waiting for SSH to be available...
	I0722 11:50:55.788984   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Getting to WaitForSSH function...
	I0722 11:50:55.791252   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.791573   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.791597   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.791699   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Using SSH client type: external
	I0722 11:50:55.791735   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa (-rw-------)
	I0722 11:50:55.791758   59477 main.go:141] libmachine: (embed-certs-802149) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:50:55.791768   59477 main.go:141] libmachine: (embed-certs-802149) DBG | About to run SSH command:
	I0722 11:50:55.791776   59477 main.go:141] libmachine: (embed-certs-802149) DBG | exit 0
	I0722 11:50:55.916215   59477 main.go:141] libmachine: (embed-certs-802149) DBG | SSH cmd err, output: <nil>: 
	I0722 11:50:55.916575   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetConfigRaw
	I0722 11:50:55.917177   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:55.919429   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.919723   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.919755   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.920020   59477 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/config.json ...
	I0722 11:50:55.920227   59477 machine.go:94] provisionDockerMachine start ...
	I0722 11:50:55.920249   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:55.920461   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:55.922469   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.922731   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.922756   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.922887   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:55.923063   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:55.923205   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:55.923340   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:55.923492   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:55.923698   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:55.923712   59477 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:50:56.032434   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:50:56.032465   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.032684   59477 buildroot.go:166] provisioning hostname "embed-certs-802149"
	I0722 11:50:56.032712   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.032892   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.035477   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.035797   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.035826   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.035969   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.036126   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.036288   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.036426   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.036649   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.036806   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.036818   59477 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-802149 && echo "embed-certs-802149" | sudo tee /etc/hostname
	I0722 11:50:56.158574   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-802149
	
	I0722 11:50:56.158609   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.161390   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.161780   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.161812   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.161978   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.162246   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.162444   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.162593   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.162793   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.162965   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.162983   59477 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-802149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-802149/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-802149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:50:56.281386   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:50:56.281421   59477 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:50:56.281454   59477 buildroot.go:174] setting up certificates
	I0722 11:50:56.281470   59477 provision.go:84] configureAuth start
	I0722 11:50:56.281487   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.281781   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:56.284122   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.284438   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.284468   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.284549   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.286400   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.286806   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.286835   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.286962   59477 provision.go:143] copyHostCerts
	I0722 11:50:56.287027   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:50:56.287038   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:50:56.287102   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:50:56.287205   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:50:56.287214   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:50:56.287241   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:50:56.287297   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:50:56.287304   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:50:56.287326   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:50:56.287372   59477 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.embed-certs-802149 san=[127.0.0.1 192.168.72.113 embed-certs-802149 localhost minikube]
	I0722 11:50:56.388618   59477 provision.go:177] copyRemoteCerts
	I0722 11:50:56.388666   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:50:56.388689   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.391149   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.391436   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.391460   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.391656   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.391810   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.391928   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.392068   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:56.474640   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:50:56.497641   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 11:50:56.519444   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:50:56.541351   59477 provision.go:87] duration metric: took 259.857731ms to configureAuth
	I0722 11:50:56.541381   59477 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:50:56.541543   59477 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:50:56.541625   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.544154   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.544682   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.544718   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.544922   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.545125   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.545301   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.545427   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.545653   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.545828   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.545844   59477 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:50:56.811690   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:50:56.811726   59477 machine.go:97] duration metric: took 891.484788ms to provisionDockerMachine
	I0722 11:50:56.811740   59477 start.go:293] postStartSetup for "embed-certs-802149" (driver="kvm2")
	I0722 11:50:56.811772   59477 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:50:56.811791   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:56.812107   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:50:56.812137   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.814602   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.815007   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.815032   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.815143   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.815380   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.815566   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.815746   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:56.904332   59477 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:50:56.908423   59477 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:50:56.908451   59477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:50:56.908508   59477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:50:56.908587   59477 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:50:56.908680   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:50:56.919264   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:50:56.943783   59477 start.go:296] duration metric: took 132.033326ms for postStartSetup
	I0722 11:50:56.943814   59477 fix.go:56] duration metric: took 18.943205526s for fixHost
	I0722 11:50:56.943833   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.946256   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.946547   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.946575   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.946732   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.946929   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.947082   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.947188   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.947356   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.947518   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.947528   59477 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:50:57.056893   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649057.031410961
	
	I0722 11:50:57.056927   59477 fix.go:216] guest clock: 1721649057.031410961
	I0722 11:50:57.056936   59477 fix.go:229] Guest: 2024-07-22 11:50:57.031410961 +0000 UTC Remote: 2024-07-22 11:50:56.943818166 +0000 UTC m=+212.308172183 (delta=87.592795ms)
	I0722 11:50:57.056961   59477 fix.go:200] guest clock delta is within tolerance: 87.592795ms
	I0722 11:50:57.056970   59477 start.go:83] releasing machines lock for "embed-certs-802149", held for 19.056384178s
	I0722 11:50:57.057002   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.057268   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:57.059965   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.060412   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.060443   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.060671   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061167   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061345   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061428   59477 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:50:57.061479   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:57.061561   59477 ssh_runner.go:195] Run: cat /version.json
	I0722 11:50:57.061586   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:57.064433   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.064735   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.064856   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.064879   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.065018   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:57.065118   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.065143   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.065201   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:57.065298   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:57.065408   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:57.065481   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:57.065556   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:57.065624   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:57.065770   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:57.167044   59477 ssh_runner.go:195] Run: systemctl --version
	I0722 11:50:57.172714   59477 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:50:57.313674   59477 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:50:57.319474   59477 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:50:57.319535   59477 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:50:57.335011   59477 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:50:57.335031   59477 start.go:495] detecting cgroup driver to use...
	I0722 11:50:57.335093   59477 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:50:57.351191   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:50:57.365322   59477 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:50:57.365376   59477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:50:57.379264   59477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:50:57.393946   59477 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:50:57.510830   59477 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:50:57.687208   59477 docker.go:233] disabling docker service ...
	I0722 11:50:57.687269   59477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:50:57.703909   59477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:50:57.717812   59477 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:50:57.855988   59477 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:50:57.973911   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:50:57.988891   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:50:58.007784   59477 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 11:50:58.007841   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.019588   59477 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:50:58.019649   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.030056   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.042635   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.053368   59477 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:50:58.064180   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.074677   59477 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.092573   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.103630   59477 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:50:58.114065   59477 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:50:58.114131   59477 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:50:58.128769   59477 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:50:58.139226   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:50:58.301342   59477 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:50:58.455996   59477 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:50:58.456085   59477 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:50:58.460904   59477 start.go:563] Will wait 60s for crictl version
	I0722 11:50:58.460969   59477 ssh_runner.go:195] Run: which crictl
	I0722 11:50:58.464918   59477 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:50:58.501783   59477 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:50:58.501867   59477 ssh_runner.go:195] Run: crio --version
	I0722 11:50:58.529010   59477 ssh_runner.go:195] Run: crio --version
	I0722 11:50:58.566811   59477 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 11:50:58.568309   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:58.571088   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:58.571594   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:58.571620   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:58.571813   59477 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 11:50:58.575927   59477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:50:58.589002   59477 kubeadm.go:883] updating cluster {Name:embed-certs-802149 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:50:58.589126   59477 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:50:58.589187   59477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:50:58.625716   59477 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 11:50:58.625836   59477 ssh_runner.go:195] Run: which lz4
	I0722 11:50:58.629760   59477 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 11:50:58.634037   59477 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:50:58.634070   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 11:50:57.080830   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .Start
	I0722 11:50:57.080987   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring networks are active...
	I0722 11:50:57.081647   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring network default is active
	I0722 11:50:57.081955   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring network mk-old-k8s-version-101261 is active
	I0722 11:50:57.082277   59674 main.go:141] libmachine: (old-k8s-version-101261) Getting domain xml...
	I0722 11:50:57.083008   59674 main.go:141] libmachine: (old-k8s-version-101261) Creating domain...
	I0722 11:50:58.331212   59674 main.go:141] libmachine: (old-k8s-version-101261) Waiting to get IP...
	I0722 11:50:58.332090   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:58.332510   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:58.332594   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:58.332505   60690 retry.go:31] will retry after 310.971479ms: waiting for machine to come up
	I0722 11:50:58.645391   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:58.645871   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:58.645898   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:58.645841   60690 retry.go:31] will retry after 371.739884ms: waiting for machine to come up
	I0722 11:50:59.019622   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.020229   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.020258   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.020202   60690 retry.go:31] will retry after 459.770177ms: waiting for machine to come up
	I0722 11:50:59.482207   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.482871   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.482901   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.482830   60690 retry.go:31] will retry after 459.633846ms: waiting for machine to come up
	I0722 11:50:59.944748   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.945204   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.945234   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.945166   60690 retry.go:31] will retry after 661.206679ms: waiting for machine to come up
	I0722 11:51:00.149442   59477 crio.go:462] duration metric: took 1.519707341s to copy over tarball
	I0722 11:51:00.149516   59477 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:02.402666   59477 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.253119001s)
	I0722 11:51:02.402691   59477 crio.go:469] duration metric: took 2.253218813s to extract the tarball
	I0722 11:51:02.402699   59477 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:02.441191   59477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:02.487854   59477 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:51:02.487881   59477 cache_images.go:84] Images are preloaded, skipping loading
	I0722 11:51:02.487890   59477 kubeadm.go:934] updating node { 192.168.72.113 8443 v1.30.3 crio true true} ...
	I0722 11:51:02.488035   59477 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-802149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:02.488123   59477 ssh_runner.go:195] Run: crio config
	I0722 11:51:02.532769   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:51:02.532790   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:02.532801   59477 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:02.532833   59477 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.113 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-802149 NodeName:embed-certs-802149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:51:02.533018   59477 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.113
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-802149"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.113
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.113"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:02.533107   59477 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 11:51:02.543311   59477 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:02.543385   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:02.552865   59477 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0722 11:51:02.569231   59477 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:02.584952   59477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0722 11:51:02.601722   59477 ssh_runner.go:195] Run: grep 192.168.72.113	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:02.605830   59477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.113	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:02.617991   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:02.739082   59477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:02.756204   59477 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149 for IP: 192.168.72.113
	I0722 11:51:02.756226   59477 certs.go:194] generating shared ca certs ...
	I0722 11:51:02.756254   59477 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:02.756452   59477 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:02.756509   59477 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:02.756521   59477 certs.go:256] generating profile certs ...
	I0722 11:51:02.756641   59477 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/client.key
	I0722 11:51:02.756720   59477 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key.447fbea1
	I0722 11:51:02.756767   59477 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.key
	I0722 11:51:02.756907   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:02.756955   59477 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:02.756968   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:02.757004   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:02.757037   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:02.757073   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:02.757130   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:02.758009   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:02.791767   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:02.833143   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:02.859372   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:02.888441   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0722 11:51:02.926712   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 11:51:02.963931   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:02.986981   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 11:51:03.010885   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:03.033851   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:03.057467   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:03.080230   59477 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:03.096981   59477 ssh_runner.go:195] Run: openssl version
	I0722 11:51:03.103002   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:03.114012   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.118692   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.118743   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.124703   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:03.134986   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:03.145119   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.149396   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.149442   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.154767   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:03.165063   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:03.175292   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.179650   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.179691   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.184991   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:51:03.195065   59477 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:03.199423   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:03.205027   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:03.210699   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:03.216411   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:03.221888   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:03.227658   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 11:51:03.233098   59477 kubeadm.go:392] StartCluster: {Name:embed-certs-802149 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:03.233171   59477 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:03.233221   59477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:03.269240   59477 cri.go:89] found id: ""
	I0722 11:51:03.269311   59477 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:03.279739   59477 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:03.279758   59477 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:03.279809   59477 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:03.289523   59477 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:03.290456   59477 kubeconfig.go:125] found "embed-certs-802149" server: "https://192.168.72.113:8443"
	I0722 11:51:03.292369   59477 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:03.301716   59477 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.113
	I0722 11:51:03.301749   59477 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:03.301758   59477 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:03.301794   59477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:03.337520   59477 cri.go:89] found id: ""
	I0722 11:51:03.337587   59477 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:03.352758   59477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:03.362272   59477 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:03.362305   59477 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:03.362350   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:51:03.370574   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:03.370621   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:03.379339   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:51:03.387427   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:03.387470   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:03.395970   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:51:03.404226   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:03.404280   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:03.412683   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:51:03.420838   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:03.420877   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:03.429146   59477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:03.440442   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:03.565768   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:04.457748   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:00.608285   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:00.608737   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:00.608759   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:00.608685   60690 retry.go:31] will retry after 728.049334ms: waiting for machine to come up
	I0722 11:51:01.337864   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:01.338406   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:01.338437   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:01.338329   60690 retry.go:31] will retry after 1.060339766s: waiting for machine to come up
	I0722 11:51:02.400096   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:02.400633   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:02.400664   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:02.400580   60690 retry.go:31] will retry after 957.922107ms: waiting for machine to come up
	I0722 11:51:03.360231   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:03.360663   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:03.360692   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:03.360612   60690 retry.go:31] will retry after 1.717107267s: waiting for machine to come up
	I0722 11:51:05.080655   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:05.081172   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:05.081196   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:05.081111   60690 retry.go:31] will retry after 1.708281457s: waiting for machine to come up
	I0722 11:51:04.673803   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:04.746647   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:04.870194   59477 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:04.870304   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.370787   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.870977   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.971259   59477 api_server.go:72] duration metric: took 1.101066217s to wait for apiserver process to appear ...
	I0722 11:51:05.971291   59477 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:51:05.971313   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:05.971841   59477 api_server.go:269] stopped: https://192.168.72.113:8443/healthz: Get "https://192.168.72.113:8443/healthz": dial tcp 192.168.72.113:8443: connect: connection refused
	I0722 11:51:06.471490   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.174013   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:09.174041   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:09.174055   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.201462   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.201513   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:09.471884   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.477573   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.477592   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:06.790946   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:06.791370   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:06.791398   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:06.791331   60690 retry.go:31] will retry after 2.398904394s: waiting for machine to come up
	I0722 11:51:09.193385   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:09.193778   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:09.193806   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:09.193704   60690 retry.go:31] will retry after 2.18416034s: waiting for machine to come up
	I0722 11:51:09.972279   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.982112   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.982144   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:10.471495   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:10.478784   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I0722 11:51:10.487326   59477 api_server.go:141] control plane version: v1.30.3
	I0722 11:51:10.487355   59477 api_server.go:131] duration metric: took 4.516056164s to wait for apiserver health ...
	I0722 11:51:10.487365   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:51:10.487374   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:10.488949   59477 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:51:10.490288   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:51:10.507047   59477 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:51:10.526828   59477 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:51:10.541695   59477 system_pods.go:59] 8 kube-system pods found
	I0722 11:51:10.541731   59477 system_pods.go:61] "coredns-7db6d8ff4d-s2zgw" [13ffaca7-beca-4c43-b7a7-2167fe71295c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:51:10.541741   59477 system_pods.go:61] "etcd-embed-certs-802149" [f81bfdc3-cc8f-40d3-9f6c-6b84b6490c07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:51:10.541752   59477 system_pods.go:61] "kube-apiserver-embed-certs-802149" [325b1597-385e-44df-b65c-2de853d792eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:51:10.541760   59477 system_pods.go:61] "kube-controller-manager-embed-certs-802149" [25d3ae23-fe5d-46b7-8d93-917d7c83912b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:51:10.541772   59477 system_pods.go:61] "kube-proxy-t9lkm" [0712acb3-3926-4b78-9c64-a7e46b1a4b18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 11:51:10.541780   59477 system_pods.go:61] "kube-scheduler-embed-certs-802149" [b521ffd3-9422-4df4-9f25-5e81a2d0fa9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:51:10.541788   59477 system_pods.go:61] "metrics-server-569cc877fc-wm2w8" [db886758-d7bb-41b3-b127-6f9fef839af0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:51:10.541799   59477 system_pods.go:61] "storage-provisioner" [291229fb-8a57-4976-911c-070ccc93adcd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 11:51:10.541810   59477 system_pods.go:74] duration metric: took 14.964696ms to wait for pod list to return data ...
	I0722 11:51:10.541822   59477 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:51:10.545280   59477 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:51:10.545307   59477 node_conditions.go:123] node cpu capacity is 2
	I0722 11:51:10.545327   59477 node_conditions.go:105] duration metric: took 3.49089ms to run NodePressure ...
	I0722 11:51:10.545349   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:10.812864   59477 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:51:10.817360   59477 kubeadm.go:739] kubelet initialised
	I0722 11:51:10.817379   59477 kubeadm.go:740] duration metric: took 4.491449ms waiting for restarted kubelet to initialise ...
	I0722 11:51:10.817387   59477 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:51:10.823766   59477 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.829370   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.829399   59477 pod_ready.go:81] duration metric: took 5.605447ms for pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.829411   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.829420   59477 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.835224   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "etcd-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.835250   59477 pod_ready.go:81] duration metric: took 5.819727ms for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.835261   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "etcd-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.835270   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.840324   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.840355   59477 pod_ready.go:81] duration metric: took 5.074415ms for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.840369   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.840378   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.939805   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.939828   59477 pod_ready.go:81] duration metric: took 99.423274ms for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.939837   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.939843   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t9lkm" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:11.329932   59477 pod_ready.go:92] pod "kube-proxy-t9lkm" in "kube-system" namespace has status "Ready":"True"
	I0722 11:51:11.329954   59477 pod_ready.go:81] duration metric: took 390.103451ms for pod "kube-proxy-t9lkm" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:11.329964   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:13.336193   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:11.378924   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:11.379301   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:11.379324   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:11.379257   60690 retry.go:31] will retry after 3.119433482s: waiting for machine to come up
	I0722 11:51:14.501549   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.502004   59674 main.go:141] libmachine: (old-k8s-version-101261) Found IP for machine: 192.168.50.51
	I0722 11:51:14.502029   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has current primary IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.502040   59674 main.go:141] libmachine: (old-k8s-version-101261) Reserving static IP address...
	I0722 11:51:14.502410   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "old-k8s-version-101261", mac: "52:54:00:e5:34:9a", ip: "192.168.50.51"} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.502429   59674 main.go:141] libmachine: (old-k8s-version-101261) Reserved static IP address: 192.168.50.51
	I0722 11:51:14.502448   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | skip adding static IP to network mk-old-k8s-version-101261 - found existing host DHCP lease matching {name: "old-k8s-version-101261", mac: "52:54:00:e5:34:9a", ip: "192.168.50.51"}
	I0722 11:51:14.502464   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Getting to WaitForSSH function...
	I0722 11:51:14.502481   59674 main.go:141] libmachine: (old-k8s-version-101261) Waiting for SSH to be available...
	I0722 11:51:14.504709   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.504989   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.505018   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.505192   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using SSH client type: external
	I0722 11:51:14.505229   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa (-rw-------)
	I0722 11:51:14.505273   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:14.505287   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | About to run SSH command:
	I0722 11:51:14.505300   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | exit 0
	I0722 11:51:14.628343   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | SSH cmd err, output: <nil>: 
	I0722 11:51:14.628747   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetConfigRaw
	I0722 11:51:14.629343   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:14.631934   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.632294   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.632323   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.632541   59674 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/config.json ...
	I0722 11:51:14.632730   59674 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:14.632747   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:14.632934   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.635214   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.635567   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.635594   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.635663   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.635887   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.636070   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.636212   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.636492   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.636656   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.636665   59674 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:14.745179   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:14.745210   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:14.745456   59674 buildroot.go:166] provisioning hostname "old-k8s-version-101261"
	I0722 11:51:14.745482   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:14.745664   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.748709   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.749155   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.749187   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.749356   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.749528   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.749708   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.749851   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.750115   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.750325   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.750339   59674 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-101261 && echo "old-k8s-version-101261" | sudo tee /etc/hostname
	I0722 11:51:14.878323   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-101261
	
	I0722 11:51:14.878374   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.881403   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.881776   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.881799   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.882004   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.882191   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.882368   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.882523   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.882714   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.882886   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.882914   59674 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-101261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-101261/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-101261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:15.005182   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:15.005211   59674 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:15.005232   59674 buildroot.go:174] setting up certificates
	I0722 11:51:15.005244   59674 provision.go:84] configureAuth start
	I0722 11:51:15.005257   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:15.005510   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:15.008414   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.008818   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.008842   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.009021   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.011255   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.011549   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.011571   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.011712   59674 provision.go:143] copyHostCerts
	I0722 11:51:15.011784   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:15.011798   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:15.011862   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:15.011991   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:15.012003   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:15.012033   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:15.012117   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:15.012126   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:15.012156   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:15.012235   59674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-101261 san=[127.0.0.1 192.168.50.51 localhost minikube old-k8s-version-101261]
	I0722 11:51:16.173298   60225 start.go:364] duration metric: took 2m0.300081245s to acquireMachinesLock for "default-k8s-diff-port-605740"
	I0722 11:51:16.173351   60225 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:51:16.173359   60225 fix.go:54] fixHost starting: 
	I0722 11:51:16.173747   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:51:16.173788   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:51:16.189994   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0722 11:51:16.190364   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:51:16.190849   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:51:16.190880   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:51:16.191295   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:51:16.191520   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:16.191701   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:51:16.193226   60225 fix.go:112] recreateIfNeeded on default-k8s-diff-port-605740: state=Stopped err=<nil>
	I0722 11:51:16.193246   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	W0722 11:51:16.193413   60225 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:51:16.195294   60225 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-605740" ...
	I0722 11:51:15.514379   59674 provision.go:177] copyRemoteCerts
	I0722 11:51:15.514438   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:15.514471   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.517061   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.517350   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.517375   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.517523   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.517692   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.517856   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.517976   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:15.598446   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:15.622512   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 11:51:15.645865   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 11:51:15.669136   59674 provision.go:87] duration metric: took 663.880253ms to configureAuth
	I0722 11:51:15.669166   59674 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:15.669360   59674 config.go:182] Loaded profile config "old-k8s-version-101261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 11:51:15.669441   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.672245   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.672720   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.672769   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.672859   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.673066   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.673228   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.673348   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.673589   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:15.673764   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:15.673784   59674 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:15.935046   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:15.935071   59674 machine.go:97] duration metric: took 1.302328915s to provisionDockerMachine
	I0722 11:51:15.935082   59674 start.go:293] postStartSetup for "old-k8s-version-101261" (driver="kvm2")
	I0722 11:51:15.935094   59674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:15.935114   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:15.935445   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:15.935485   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.938454   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.938802   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.938828   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.939013   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.939212   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.939341   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.939477   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.023536   59674 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:16.028446   59674 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:16.028474   59674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:16.028542   59674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:16.028639   59674 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:16.028746   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:16.038705   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:16.065421   59674 start.go:296] duration metric: took 130.328201ms for postStartSetup
	I0722 11:51:16.065455   59674 fix.go:56] duration metric: took 19.008317885s for fixHost
	I0722 11:51:16.065480   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.068098   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.068330   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.068354   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.068486   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.068697   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.068883   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.069035   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.069215   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:16.069371   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:16.069380   59674 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:51:16.173115   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649076.142588532
	
	I0722 11:51:16.173135   59674 fix.go:216] guest clock: 1721649076.142588532
	I0722 11:51:16.173149   59674 fix.go:229] Guest: 2024-07-22 11:51:16.142588532 +0000 UTC Remote: 2024-07-22 11:51:16.065460257 +0000 UTC m=+220.687192060 (delta=77.128275ms)
	I0722 11:51:16.173189   59674 fix.go:200] guest clock delta is within tolerance: 77.128275ms
	I0722 11:51:16.173196   59674 start.go:83] releasing machines lock for "old-k8s-version-101261", held for 19.116093793s
	I0722 11:51:16.173224   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.173497   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:16.176102   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.176522   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.176564   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.176712   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177189   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177387   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177476   59674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:16.177519   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.177627   59674 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:16.177650   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.180365   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180402   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180751   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.180773   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180806   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.180819   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180908   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.181020   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.181091   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.181168   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.181254   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.181331   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.181346   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.181492   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.262013   59674 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:16.292921   59674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:16.437729   59674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:16.443840   59674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:16.443929   59674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:16.459686   59674 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:16.459703   59674 start.go:495] detecting cgroup driver to use...
	I0722 11:51:16.459761   59674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:16.474514   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:16.487808   59674 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:16.487862   59674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:16.500977   59674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:16.514210   59674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:16.629558   59674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:16.810274   59674 docker.go:233] disabling docker service ...
	I0722 11:51:16.810351   59674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:16.829708   59674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:16.848587   59674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:16.973745   59674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:17.114538   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:17.128727   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:17.147575   59674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 11:51:17.147628   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.157881   59674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:17.157939   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.168881   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.179407   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.189894   59674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:17.201433   59674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:17.210901   59674 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:17.210954   59674 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:17.224683   59674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:51:17.235711   59674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:17.366833   59674 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:17.508852   59674 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:17.508932   59674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:17.514001   59674 start.go:563] Will wait 60s for crictl version
	I0722 11:51:17.514051   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:17.517678   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:17.555193   59674 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:17.555272   59674 ssh_runner.go:195] Run: crio --version
	I0722 11:51:17.583250   59674 ssh_runner.go:195] Run: crio --version
	I0722 11:51:17.615045   59674 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
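The CRI-O preparation recorded above boils down to a handful of guest-side commands (write /etc/crictl.yaml, set the pause image and cgroup manager, reload and restart crio). A minimal Go sketch that replays those same commands, assuming a direct root shell on the guest rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// runStep executes one shell step the way the log's ssh_runner does: via `sh -c`.
func runStep(script string) error {
	return exec.Command("sh", "-c", script).Run()
}

func main() {
	steps := []string{
		// point crictl at the CRI-O socket
		`sudo mkdir -p /etc && printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" | sudo tee /etc/crictl.yaml`,
		// pause image and cgroup driver, as logged by crio.go above
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		// pick up the new configuration
		`sudo systemctl daemon-reload && sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if err := runStep(s); err != nil {
			fmt.Printf("step failed: %v\n", err)
			return
		}
	}
	fmt.Println("cri-o configured and restarted")
}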
	I0722 11:51:15.837077   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:17.838129   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:17.616423   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:17.619616   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:17.620012   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:17.620043   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:17.620213   59674 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:17.624632   59674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:17.639759   59674 kubeadm.go:883] updating cluster {Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:17.639882   59674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 11:51:17.639923   59674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:17.688299   59674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:51:17.688370   59674 ssh_runner.go:195] Run: which lz4
	I0722 11:51:17.692462   59674 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 11:51:17.696723   59674 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:51:17.696761   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 11:51:19.364933   59674 crio.go:462] duration metric: took 1.672511697s to copy over tarball
	I0722 11:51:19.365010   59674 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:16.196500   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Start
	I0722 11:51:16.196676   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring networks are active...
	I0722 11:51:16.197307   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring network default is active
	I0722 11:51:16.197719   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring network mk-default-k8s-diff-port-605740 is active
	I0722 11:51:16.198143   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Getting domain xml...
	I0722 11:51:16.198839   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Creating domain...
	I0722 11:51:17.463368   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting to get IP...
	I0722 11:51:17.464268   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.464666   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.464716   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:17.464632   60829 retry.go:31] will retry after 215.824583ms: waiting for machine to come up
	I0722 11:51:17.682231   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.682588   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.682616   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:17.682546   60829 retry.go:31] will retry after 345.816562ms: waiting for machine to come up
	I0722 11:51:18.030040   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.030591   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.030625   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.030526   60829 retry.go:31] will retry after 332.854172ms: waiting for machine to come up
	I0722 11:51:18.365009   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.365493   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.365522   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.365455   60829 retry.go:31] will retry after 478.33893ms: waiting for machine to come up
	I0722 11:51:18.846014   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.846447   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.846475   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.846386   60829 retry.go:31] will retry after 484.269461ms: waiting for machine to come up
	I0722 11:51:19.332181   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:19.332572   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:19.332607   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:19.332523   60829 retry.go:31] will retry after 856.318702ms: waiting for machine to come up
	I0722 11:51:20.190301   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.190751   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.190775   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:20.190702   60829 retry.go:31] will retry after 747.6345ms: waiting for machine to come up
	I0722 11:51:19.838679   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:21.850685   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:24.338532   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:22.347245   59674 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.982204367s)
	I0722 11:51:22.347275   59674 crio.go:469] duration metric: took 2.982313685s to extract the tarball
	I0722 11:51:22.347283   59674 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:22.390059   59674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:22.429356   59674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:51:22.429383   59674 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 11:51:22.429499   59674 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.429520   59674 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.429524   59674 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.429545   59674 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 11:51:22.429498   59674 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.429497   59674 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.429529   59674 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.429498   59674 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.431549   59674 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.431556   59674 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 11:51:22.431570   59674 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.431588   59674 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.431611   59674 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.431555   59674 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.431666   59674 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.431675   59674 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.603462   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.604733   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.608788   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.611177   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.616981   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.634838   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.674004   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 11:51:22.706162   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.730052   59674 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 11:51:22.730112   59674 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 11:51:22.730129   59674 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.730142   59674 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.730183   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.730196   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.760229   59674 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 11:51:22.760271   59674 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.760322   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.787207   59674 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 11:51:22.787244   59674 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 11:51:22.787254   59674 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.787273   59674 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.787303   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.787311   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.828611   59674 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 11:51:22.828656   59674 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.828703   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.841609   59674 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 11:51:22.841648   59674 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 11:51:22.841692   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.913517   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.913549   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.913557   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.913605   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.913519   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.913605   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.913625   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 11:51:23.063640   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 11:51:23.063652   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 11:51:23.063742   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 11:51:23.063766   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 11:51:23.070202   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0722 11:51:23.073265   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 11:51:23.073310   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 11:51:23.073358   59674 cache_images.go:92] duration metric: took 643.962788ms to LoadCachedImages
	W0722 11:51:23.073425   59674 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
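The LoadCachedImages pass above amounts to probing the runtime for each image with the same `podman image inspect --format {{.Id}}` call the log records, then marking absent images as needing transfer. A minimal sketch of that probe, assuming guest-local execution and a shortened image list:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/etcd:3.4.13-0",
		"registry.k8s.io/pause:3.2",
	}
	for _, img := range images {
		// same probe as the log: print the stored image ID, non-zero exit if absent
		out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", img).Output()
		if err != nil {
			fmt.Printf("%s: not present in the runtime, needs transfer\n", img)
			continue
		}
		fmt.Printf("%s: present as %s\n", img, strings.TrimSpace(string(out)))
	}
}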
	I0722 11:51:23.073438   59674 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.20.0 crio true true} ...
	I0722 11:51:23.073584   59674 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-101261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:23.073666   59674 ssh_runner.go:195] Run: crio config
	I0722 11:51:23.125532   59674 cni.go:84] Creating CNI manager for ""
	I0722 11:51:23.125554   59674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:23.125566   59674 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:23.125590   59674 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-101261 NodeName:old-k8s-version-101261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 11:51:23.125753   59674 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-101261"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:23.125818   59674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 11:51:23.136207   59674 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:23.136277   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:23.146103   59674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0722 11:51:23.163756   59674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:23.183108   59674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0722 11:51:23.201223   59674 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:23.205369   59674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:23.218711   59674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:23.339415   59674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:23.358601   59674 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261 for IP: 192.168.50.51
	I0722 11:51:23.358622   59674 certs.go:194] generating shared ca certs ...
	I0722 11:51:23.358654   59674 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:23.358813   59674 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:23.358865   59674 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:23.358877   59674 certs.go:256] generating profile certs ...
	I0722 11:51:23.358990   59674 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/client.key
	I0722 11:51:23.359058   59674 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key.455618c3
	I0722 11:51:23.359110   59674 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key
	I0722 11:51:23.359248   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:23.359286   59674 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:23.359300   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:23.359332   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:23.359363   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:23.359393   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:23.359445   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:23.360290   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:23.407113   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:23.439799   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:23.484136   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:23.513902   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 11:51:23.551266   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:51:23.581930   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:23.612470   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:51:23.644003   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:23.671068   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:23.695514   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:23.722711   59674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:23.742312   59674 ssh_runner.go:195] Run: openssl version
	I0722 11:51:23.749680   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:23.763975   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.769799   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.769848   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.777286   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:51:23.788007   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:23.799005   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.803367   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.803405   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.809239   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:23.820095   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:23.832492   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.837230   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.837268   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.842861   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:23.853772   59674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:23.858178   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:23.864134   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:23.870035   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:23.875939   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:23.881552   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:23.887286   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
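Each of the openssl probes above uses `-checkend 86400`, which exits non-zero when the certificate is unreadable or expires within the next 24 hours. A minimal sketch of the same loop over the control-plane certs named in the log, assuming guest-local execution:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		// non-zero exit: cert missing, unreadable, or expiring within 86400 seconds
		if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
			fmt.Printf("%s: needs regeneration (%v)\n", c, err)
			continue
		}
		fmt.Printf("%s: valid for at least 24h\n", c)
	}
}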
	I0722 11:51:23.893029   59674 kubeadm.go:392] StartCluster: {Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:23.893133   59674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:23.893184   59674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:23.939121   59674 cri.go:89] found id: ""
	I0722 11:51:23.939187   59674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:23.951089   59674 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:23.951108   59674 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:23.951154   59674 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:23.962212   59674 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:23.963627   59674 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-101261" does not appear in /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:51:23.964627   59674 kubeconfig.go:62] /home/jenkins/minikube-integration/19313-5960/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-101261" cluster setting kubeconfig missing "old-k8s-version-101261" context setting]
	I0722 11:51:23.966075   59674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:24.070513   59674 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:24.081628   59674 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0722 11:51:24.081662   59674 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:24.081674   59674 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:24.081728   59674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:24.117673   59674 cri.go:89] found id: ""
	I0722 11:51:24.117750   59674 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:24.134081   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:24.144294   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:24.144315   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:24.144366   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:51:24.153640   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:24.153685   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:24.163252   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:51:24.173762   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:24.173815   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:24.183272   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:51:24.194090   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:24.194148   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:24.205213   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:51:24.215709   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:24.215787   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:24.226876   59674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:24.237966   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:24.378277   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:20.939620   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.940073   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.940106   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:20.940007   60829 retry.go:31] will retry after 1.295925992s: waiting for machine to come up
	I0722 11:51:22.237614   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:22.238096   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:22.238128   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:22.238045   60829 retry.go:31] will retry after 1.652562745s: waiting for machine to come up
	I0722 11:51:23.891976   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:23.892496   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:23.892519   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:23.892468   60829 retry.go:31] will retry after 2.313623774s: waiting for machine to come up
	I0722 11:51:24.839903   59477 pod_ready.go:92] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:51:24.839939   59477 pod_ready.go:81] duration metric: took 13.509966584s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:24.839957   59477 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:26.847104   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:29.345675   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:25.787025   59674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.408710522s)
	I0722 11:51:25.787059   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.031231   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.120122   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.216108   59674 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:26.216204   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:26.717257   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:27.216782   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:27.716476   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:28.216529   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:28.716302   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:29.216249   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:29.717071   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:30.216364   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:26.207294   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:26.207841   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:26.207867   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:26.207805   60829 retry.go:31] will retry after 2.606127418s: waiting for machine to come up
	I0722 11:51:28.817432   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:28.817795   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:28.817851   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:28.817748   60829 retry.go:31] will retry after 2.617524673s: waiting for machine to come up
	I0722 11:51:31.346476   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:33.847820   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:30.716961   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.216474   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.716685   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:32.216748   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:32.716886   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:33.216333   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:33.717052   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:34.217128   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:34.716466   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:35.216975   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
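The repeated pgrep runs above are the wait for the kube-apiserver process to appear after the kubeadm init phases; the timestamps show roughly a 500 ms poll interval. A bare-bones sketch of that wait loop, assuming local execution and an illustrative timeout (not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout only
	for time.Now().Before(deadline) {
		// pgrep -xnf: match the full command line, newest matching process only
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("kube-apiserver process appeared, pid %s\n", strings.TrimSpace(string(out)))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver process")
}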
	I0722 11:51:31.436413   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:31.436710   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:31.436745   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:31.436665   60829 retry.go:31] will retry after 3.455203757s: waiting for machine to come up
	I0722 11:51:34.896151   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.896595   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Found IP for machine: 192.168.39.87
	I0722 11:51:34.896619   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Reserving static IP address...
	I0722 11:51:34.896637   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has current primary IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.897007   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Reserved static IP address: 192.168.39.87
	I0722 11:51:34.897037   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-605740", mac: "52:54:00:23:45:e9", ip: "192.168.39.87"} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:34.897074   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for SSH to be available...
	I0722 11:51:34.897094   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | skip adding static IP to network mk-default-k8s-diff-port-605740 - found existing host DHCP lease matching {name: "default-k8s-diff-port-605740", mac: "52:54:00:23:45:e9", ip: "192.168.39.87"}
	I0722 11:51:34.897107   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Getting to WaitForSSH function...
	I0722 11:51:34.899104   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.899428   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:34.899450   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.899570   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Using SSH client type: external
	I0722 11:51:34.899594   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa (-rw-------)
	I0722 11:51:34.899619   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:34.899636   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | About to run SSH command:
	I0722 11:51:34.899651   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | exit 0
	I0722 11:51:35.028440   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | SSH cmd err, output: <nil>: 
	I0722 11:51:35.028814   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetConfigRaw
	I0722 11:51:35.029407   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:35.031646   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.031967   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.031998   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.032179   60225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/config.json ...
	I0722 11:51:35.032355   60225 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:35.032372   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:35.032587   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.034608   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.034924   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.034944   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.035089   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.035242   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.035368   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.035497   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.035637   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.035812   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.035823   60225 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:35.148621   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:35.148655   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.148914   60225 buildroot.go:166] provisioning hostname "default-k8s-diff-port-605740"
	I0722 11:51:35.148945   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.149128   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.151753   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.152146   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.152170   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.152294   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.152461   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.152591   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.152706   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.152847   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.153057   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.153079   60225 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-605740 && echo "default-k8s-diff-port-605740" | sudo tee /etc/hostname
	I0722 11:51:35.278248   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-605740
	
	I0722 11:51:35.278277   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.281778   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.282158   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.282189   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.282361   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.282539   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.282712   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.282826   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.283014   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.283239   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.283266   60225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-605740' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-605740/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-605740' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:35.405142   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:35.405176   60225 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:35.405215   60225 buildroot.go:174] setting up certificates
	I0722 11:51:35.405228   60225 provision.go:84] configureAuth start
	I0722 11:51:35.405240   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.405502   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:35.407912   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.408262   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.408284   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.408435   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.410456   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.410794   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.410821   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.410959   60225 provision.go:143] copyHostCerts
	I0722 11:51:35.411021   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:35.411034   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:35.411613   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:35.411720   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:35.411729   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:35.411749   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:35.411803   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:35.411811   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:35.411827   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:35.411881   60225 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-605740 san=[127.0.0.1 192.168.39.87 default-k8s-diff-port-605740 localhost minikube]
	I0722 11:51:36.476985   58921 start.go:364] duration metric: took 53.473936955s to acquireMachinesLock for "no-preload-339929"
	I0722 11:51:36.477060   58921 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:51:36.477071   58921 fix.go:54] fixHost starting: 
	I0722 11:51:36.477497   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:51:36.477538   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:51:36.494783   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33955
	I0722 11:51:36.495220   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:51:36.495728   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:51:36.495749   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:51:36.496045   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:51:36.496241   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:36.496399   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:51:36.497658   58921 fix.go:112] recreateIfNeeded on no-preload-339929: state=Stopped err=<nil>
	I0722 11:51:36.497681   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	W0722 11:51:36.497840   58921 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:51:36.499655   58921 out.go:177] * Restarting existing kvm2 VM for "no-preload-339929" ...
	I0722 11:51:35.787061   60225 provision.go:177] copyRemoteCerts
	I0722 11:51:35.787119   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:35.787143   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.789647   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.790048   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.790081   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.790289   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.790502   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.790665   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.790815   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:35.878791   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0722 11:51:35.902034   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:51:35.925234   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:35.948008   60225 provision.go:87] duration metric: took 542.764534ms to configureAuth
	I0722 11:51:35.948038   60225 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:35.948231   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:51:35.948315   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.951029   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.951381   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.951413   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.951561   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.951777   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.951927   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.952064   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.952196   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.952447   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.952465   60225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:36.234284   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:36.234329   60225 machine.go:97] duration metric: took 1.201960693s to provisionDockerMachine
	I0722 11:51:36.234342   60225 start.go:293] postStartSetup for "default-k8s-diff-port-605740" (driver="kvm2")
	I0722 11:51:36.234355   60225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:36.234375   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.234712   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:36.234742   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.237536   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.237897   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.237928   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.238045   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.238253   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.238435   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.238580   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.322600   60225 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:36.326734   60225 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:36.326753   60225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:36.326809   60225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:36.326893   60225 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:36.326981   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:36.335877   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:36.359701   60225 start.go:296] duration metric: took 125.346106ms for postStartSetup
	I0722 11:51:36.359734   60225 fix.go:56] duration metric: took 20.186375753s for fixHost
	I0722 11:51:36.359751   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.362282   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.362581   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.362603   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.362782   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.362976   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.363121   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.363218   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.363355   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:36.363506   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:36.363515   60225 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:51:36.476833   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649096.450051771
	
	I0722 11:51:36.476869   60225 fix.go:216] guest clock: 1721649096.450051771
	I0722 11:51:36.476877   60225 fix.go:229] Guest: 2024-07-22 11:51:36.450051771 +0000 UTC Remote: 2024-07-22 11:51:36.359737602 +0000 UTC m=+140.620851572 (delta=90.314169ms)
	I0722 11:51:36.476895   60225 fix.go:200] guest clock delta is within tolerance: 90.314169ms
	I0722 11:51:36.476900   60225 start.go:83] releasing machines lock for "default-k8s-diff-port-605740", held for 20.303575504s
	I0722 11:51:36.476926   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.477201   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:36.480567   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.480990   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.481020   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.481182   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481657   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481827   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481906   60225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:36.481947   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.482026   60225 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:36.482044   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.484577   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.484762   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485029   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.485054   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485199   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.485224   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485246   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.485393   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.485406   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.485524   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.485537   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.485652   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.485729   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.485788   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.565892   60225 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:36.592221   60225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:36.739153   60225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:36.746870   60225 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:36.746933   60225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:36.766745   60225 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:36.766769   60225 start.go:495] detecting cgroup driver to use...
	I0722 11:51:36.766837   60225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:36.782140   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:36.797037   60225 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:36.797118   60225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:36.810796   60225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:36.823955   60225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:36.943613   60225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:37.123238   60225 docker.go:233] disabling docker service ...
	I0722 11:51:37.123318   60225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:37.138682   60225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:37.153426   60225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:37.279469   60225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:37.404250   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:37.428047   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:37.446939   60225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 11:51:37.446994   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.457326   60225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:37.457400   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.468141   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.479246   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.489857   60225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:37.502713   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.517197   60225 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.537115   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.548917   60225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:37.559530   60225 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:37.559590   60225 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:37.574785   60225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:51:37.585589   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:37.730483   60225 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:37.888282   60225 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:37.888373   60225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:37.893498   60225 start.go:563] Will wait 60s for crictl version
	I0722 11:51:37.893555   60225 ssh_runner.go:195] Run: which crictl
	I0722 11:51:37.897212   60225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:37.940959   60225 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:37.941054   60225 ssh_runner.go:195] Run: crio --version
	I0722 11:51:37.969273   60225 ssh_runner.go:195] Run: crio --version
	I0722 11:51:38.001475   60225 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 11:51:36.345564   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:38.349105   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:35.716593   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.216517   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.716294   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:37.217023   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:37.716511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:38.216231   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:38.716522   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:39.216492   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:39.716478   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:40.216337   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.500994   58921 main.go:141] libmachine: (no-preload-339929) Calling .Start
	I0722 11:51:36.501149   58921 main.go:141] libmachine: (no-preload-339929) Ensuring networks are active...
	I0722 11:51:36.501737   58921 main.go:141] libmachine: (no-preload-339929) Ensuring network default is active
	I0722 11:51:36.502002   58921 main.go:141] libmachine: (no-preload-339929) Ensuring network mk-no-preload-339929 is active
	I0722 11:51:36.502421   58921 main.go:141] libmachine: (no-preload-339929) Getting domain xml...
	I0722 11:51:36.503225   58921 main.go:141] libmachine: (no-preload-339929) Creating domain...
	I0722 11:51:37.794982   58921 main.go:141] libmachine: (no-preload-339929) Waiting to get IP...
	I0722 11:51:37.795825   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:37.796235   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:37.796291   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:37.796218   61023 retry.go:31] will retry after 217.454766ms: waiting for machine to come up
	I0722 11:51:38.015757   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.016236   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.016258   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.016185   61023 retry.go:31] will retry after 374.564997ms: waiting for machine to come up
	I0722 11:51:38.392755   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.393280   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.393310   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.393238   61023 retry.go:31] will retry after 462.45005ms: waiting for machine to come up
	I0722 11:51:38.856969   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.857508   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.857539   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.857455   61023 retry.go:31] will retry after 440.89249ms: waiting for machine to come up
	I0722 11:51:39.300253   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:39.300834   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:39.300860   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:39.300774   61023 retry.go:31] will retry after 746.547558ms: waiting for machine to come up
	I0722 11:51:40.048708   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:40.049175   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:40.049211   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:40.049133   61023 retry.go:31] will retry after 608.540931ms: waiting for machine to come up
	I0722 11:51:38.002695   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:38.005678   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:38.006057   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:38.006085   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:38.006276   60225 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:38.010327   60225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:38.023216   60225 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:38.023326   60225 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:51:38.023375   60225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:38.059519   60225 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 11:51:38.059603   60225 ssh_runner.go:195] Run: which lz4
	I0722 11:51:38.063709   60225 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 11:51:38.068879   60225 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:51:38.068903   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 11:51:39.570299   60225 crio.go:462] duration metric: took 1.50662056s to copy over tarball
	I0722 11:51:39.570380   60225 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:40.846268   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:42.848761   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:40.716395   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:41.216516   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:41.716363   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:42.217236   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:42.716938   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:43.216950   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:43.717242   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.216318   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.716925   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.216991   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:40.658992   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:40.659502   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:40.659542   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:40.659447   61023 retry.go:31] will retry after 974.447874ms: waiting for machine to come up
	I0722 11:51:41.636057   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:41.636596   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:41.636620   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:41.636538   61023 retry.go:31] will retry after 1.040271869s: waiting for machine to come up
	I0722 11:51:42.678559   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:42.678995   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:42.679018   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:42.678938   61023 retry.go:31] will retry after 1.797018808s: waiting for machine to come up
	I0722 11:51:44.477360   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:44.477729   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:44.477764   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:44.477687   61023 retry.go:31] will retry after 2.040933698s: waiting for machine to come up
	I0722 11:51:41.921416   60225 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.35100934s)
	I0722 11:51:41.921453   60225 crio.go:469] duration metric: took 2.351127326s to extract the tarball
	I0722 11:51:41.921460   60225 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:41.959856   60225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:42.011834   60225 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:51:42.011864   60225 cache_images.go:84] Images are preloaded, skipping loading
	I0722 11:51:42.011874   60225 kubeadm.go:934] updating node { 192.168.39.87 8444 v1.30.3 crio true true} ...
	I0722 11:51:42.012016   60225 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-605740 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:42.012101   60225 ssh_runner.go:195] Run: crio config
	I0722 11:51:42.067629   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:51:42.067650   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:42.067661   60225 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:42.067681   60225 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.87 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-605740 NodeName:default-k8s-diff-port-605740 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.87"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:51:42.067849   60225 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.87
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-605740"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.87"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:42.067926   60225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 11:51:42.079267   60225 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:42.079331   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:42.089696   60225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0722 11:51:42.109204   60225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:42.125186   60225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0722 11:51:42.143217   60225 ssh_runner.go:195] Run: grep 192.168.39.87	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:42.147117   60225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.87	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:42.159283   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:42.297313   60225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:42.315795   60225 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740 for IP: 192.168.39.87
	I0722 11:51:42.315819   60225 certs.go:194] generating shared ca certs ...
	I0722 11:51:42.315838   60225 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:42.316036   60225 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:42.316104   60225 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:42.316121   60225 certs.go:256] generating profile certs ...
	I0722 11:51:42.316211   60225 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/client.key
	I0722 11:51:42.316281   60225 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.key.82803a6c
	I0722 11:51:42.316344   60225 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.key
	I0722 11:51:42.316515   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:42.316562   60225 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:42.316575   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:42.316606   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:42.316642   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:42.316673   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:42.316729   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:42.317611   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:42.368371   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:42.396161   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:42.423661   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:42.461478   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 11:51:42.492145   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:51:42.523047   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:42.551774   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 11:51:42.576922   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:42.600869   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:42.624223   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:42.647454   60225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:42.664055   60225 ssh_runner.go:195] Run: openssl version
	I0722 11:51:42.670102   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:42.681220   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.685927   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.685979   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.691823   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:42.702680   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:42.713592   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.719980   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.720042   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.727573   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:42.741805   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:42.756511   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.761951   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.762007   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.767540   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:51:42.777758   60225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:42.782242   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:42.787989   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:42.793552   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:42.799083   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:42.804666   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:42.810222   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
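Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means the cert expires within that window and would have to be regenerated instead of reused. A rough Go equivalent of that check using crypto/x509 (the path is one of the files from the log and is purely illustrative; this is not minikube's code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path stops being valid
	// before now+window, i.e. what `openssl x509 -checkend <seconds>` tests.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// 24h matches -checkend 86400 in the log above.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}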
	I0722 11:51:42.818545   60225 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:42.818639   60225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:42.818689   60225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:42.869630   60225 cri.go:89] found id: ""
	I0722 11:51:42.869706   60225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:42.881642   60225 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:42.881666   60225 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:42.881716   60225 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:42.891566   60225 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:42.892605   60225 kubeconfig.go:125] found "default-k8s-diff-port-605740" server: "https://192.168.39.87:8444"
	I0722 11:51:42.894819   60225 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:42.906152   60225 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.87
	I0722 11:51:42.906184   60225 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:42.906197   60225 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:42.906244   60225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:42.943687   60225 cri.go:89] found id: ""
	I0722 11:51:42.943765   60225 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:42.962989   60225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:42.974334   60225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:42.974351   60225 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:42.974398   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 11:51:42.985009   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:42.985069   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:42.996084   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 11:51:43.006592   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:43.006643   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:43.017500   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 11:51:43.026779   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:43.026853   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:43.037913   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 11:51:43.048504   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:43.048548   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:43.058045   60225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:43.067626   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:43.195638   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.027881   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.237863   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.306672   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
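Because existing configuration was found, the restart path does not run a full `kubeadm init`; it replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml, with PATH pointed at the cached v1.30.3 binaries. A hedged sketch of driving those same phases from Go with os/exec (command strings mirror the log above; this is illustrative, not minikube's runner):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Phase order and paths follow the log lines above.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			cmd := exec.Command("/bin/bash", "-c",
				fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase))
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
				return
			}
		}
	}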
	I0722 11:51:44.409525   60225 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:44.409655   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.909710   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.409772   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.465579   60225 api_server.go:72] duration metric: took 1.056052731s to wait for apiserver process to appear ...
	I0722 11:51:45.465613   60225 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:51:45.465634   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:45.466164   60225 api_server.go:269] stopped: https://192.168.39.87:8444/healthz: Get "https://192.168.39.87:8444/healthz": dial tcp 192.168.39.87:8444: connect: connection refused
	I0722 11:51:45.349550   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:47.847373   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:45.717299   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.216545   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.717273   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:47.217030   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:47.716837   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:48.216368   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:48.716993   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:49.216273   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:49.717087   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:50.216313   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.520086   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:46.520553   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:46.520583   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:46.520514   61023 retry.go:31] will retry after 2.21537525s: waiting for machine to come up
	I0722 11:51:48.737964   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:48.738435   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:48.738478   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:48.738387   61023 retry.go:31] will retry after 3.351574636s: waiting for machine to come up
	I0722 11:51:45.966026   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:48.955885   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:48.955919   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:48.955938   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.001144   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:49.001176   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:49.001190   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.011522   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:49.011567   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:49.466002   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.470318   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:49.470339   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:49.965932   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.974634   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:49.974659   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:50.466354   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:50.471348   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:50.471375   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:50.966014   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:50.970321   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:50.970344   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:51.466452   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:51.470676   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:51.470703   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:51.966303   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:51.970628   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:51.970654   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:52.466173   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:52.473153   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 200:
	ok
	I0722 11:51:52.479257   60225 api_server.go:141] control plane version: v1.30.3
	I0722 11:51:52.479280   60225 api_server.go:131] duration metric: took 7.013661456s to wait for apiserver health ...
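The 403 → 500 → 200 progression above is the apiserver coming up: anonymous /healthz requests are rejected until the RBAC bootstrap roles that permit them exist, then /healthz reports individual post-start hooks as failed until they finish, and finally returns plain "ok". A minimal polling sketch in Go (InsecureSkipVerify stands in for minikube's real certificate handling; the endpoint is the one from the log; this is not the actual api_server.go code):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver cert is signed by minikube's own CA; skipping verification
			// here is purely to keep the sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz at %s not ready after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.87:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}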
	I0722 11:51:52.479289   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:51:52.479295   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:52.480886   60225 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:51:50.346624   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:52.847483   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:50.716844   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:51.216793   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:51.716262   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.216710   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.716538   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:53.216424   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:53.716256   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:54.216266   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:54.716357   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:55.217214   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.091480   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:52.091931   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:52.091958   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:52.091893   61023 retry.go:31] will retry after 3.862235046s: waiting for machine to come up
	I0722 11:51:52.481952   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:51:52.493302   60225 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:51:52.517874   60225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:51:52.525926   60225 system_pods.go:59] 8 kube-system pods found
	I0722 11:51:52.525951   60225 system_pods.go:61] "coredns-7db6d8ff4d-dp56v" [5027da7d-5dc8-4ac5-ae15-ec99dffdce28] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:51:52.525960   60225 system_pods.go:61] "etcd-default-k8s-diff-port-605740" [648d4b21-2c2a-4ac7-a114-660379463d7d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:51:52.525967   60225 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-605740" [89ae1525-c944-4645-8951-e8834c9347b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:51:52.525978   60225 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-605740" [ff83ae5c-1dea-4633-afb8-c6487d1463b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:51:52.525983   60225 system_pods.go:61] "kube-proxy-ssttk" [6967a89c-ac7d-413f-bd0e-504367edca66] Running
	I0722 11:51:52.525991   60225 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-605740" [f930864f-4486-4c95-96f2-3004f58e80b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:51:52.526001   60225 system_pods.go:61] "metrics-server-569cc877fc-mzcvn" [9913463e-4ff9-4baa-a26e-76694605652e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:51:52.526009   60225 system_pods.go:61] "storage-provisioner" [08880428-a182-4540-a6f7-afffa3fc82a6] Running
	I0722 11:51:52.526020   60225 system_pods.go:74] duration metric: took 8.125407ms to wait for pod list to return data ...
	I0722 11:51:52.526030   60225 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:51:52.528765   60225 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:51:52.528788   60225 node_conditions.go:123] node cpu capacity is 2
	I0722 11:51:52.528801   60225 node_conditions.go:105] duration metric: took 2.765554ms to run NodePressure ...
	I0722 11:51:52.528822   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:52.797071   60225 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:51:52.802281   60225 kubeadm.go:739] kubelet initialised
	I0722 11:51:52.802311   60225 kubeadm.go:740] duration metric: took 5.210344ms waiting for restarted kubelet to initialise ...
	I0722 11:51:52.802322   60225 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:51:52.808512   60225 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.819816   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.819849   60225 pod_ready.go:81] duration metric: took 11.258701ms for pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.819861   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.819870   60225 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.825916   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.825958   60225 pod_ready.go:81] duration metric: took 6.076418ms for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.825977   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.825990   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.832243   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.832272   60225 pod_ready.go:81] duration metric: took 6.26533ms for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.832286   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.832295   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:54.841497   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
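pod_ready.go above skips pods whose node still reports Ready:"False", then keeps waiting up to 4m0s for each system pod's Ready condition to turn True. A compressed client-go sketch of that condition check (the kubeconfig path is a placeholder and this is not the minikube helper itself):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the pod until its Ready condition is True or the timeout expires.
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
	}

	func main() {
		// Placeholder kubeconfig path; the test uses the profile's generated kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(waitPodReady(cs, "kube-system", "kube-controller-manager-default-k8s-diff-port-605740", 4*time.Minute))
	}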
	I0722 11:51:55.958678   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.959165   58921 main.go:141] libmachine: (no-preload-339929) Found IP for machine: 192.168.61.112
	I0722 11:51:55.959188   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has current primary IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.959195   58921 main.go:141] libmachine: (no-preload-339929) Reserving static IP address...
	I0722 11:51:55.959744   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "no-preload-339929", mac: "52:54:00:8d:72:69", ip: "192.168.61.112"} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:55.959774   58921 main.go:141] libmachine: (no-preload-339929) DBG | skip adding static IP to network mk-no-preload-339929 - found existing host DHCP lease matching {name: "no-preload-339929", mac: "52:54:00:8d:72:69", ip: "192.168.61.112"}
	I0722 11:51:55.959790   58921 main.go:141] libmachine: (no-preload-339929) Reserved static IP address: 192.168.61.112
	I0722 11:51:55.959806   58921 main.go:141] libmachine: (no-preload-339929) Waiting for SSH to be available...
	I0722 11:51:55.959817   58921 main.go:141] libmachine: (no-preload-339929) DBG | Getting to WaitForSSH function...
	I0722 11:51:55.962308   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.962703   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:55.962724   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.962853   58921 main.go:141] libmachine: (no-preload-339929) DBG | Using SSH client type: external
	I0722 11:51:55.962876   58921 main.go:141] libmachine: (no-preload-339929) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa (-rw-------)
	I0722 11:51:55.962924   58921 main.go:141] libmachine: (no-preload-339929) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:55.962946   58921 main.go:141] libmachine: (no-preload-339929) DBG | About to run SSH command:
	I0722 11:51:55.962963   58921 main.go:141] libmachine: (no-preload-339929) DBG | exit 0
	I0722 11:51:56.084629   58921 main.go:141] libmachine: (no-preload-339929) DBG | SSH cmd err, output: <nil>: 
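WaitForSSH above shells out to the external ssh binary and simply runs `exit 0` until the guest accepts the connection. An equivalent readiness probe written against golang.org/x/crypto/ssh, for illustration only (key path and address come from the log; host key checking is disabled just like the -o StrictHostKeyChecking=no flag above):

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// sshReady returns nil once `exit 0` can be executed over SSH as user docker.
	func sshReady(addr, keyPath string, timeout time.Duration) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // same effect as StrictHostKeyChecking=no
			Timeout:         10 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				sess, serr := client.NewSession()
				if serr == nil {
					rerr := sess.Run("exit 0")
					sess.Close()
					client.Close()
					if rerr == nil {
						return nil
					}
				} else {
					client.Close()
				}
			}
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("ssh at %s not ready after %s", addr, timeout)
	}

	func main() {
		err := sshReady("192.168.61.112:22", "/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa", 2*time.Minute)
		fmt.Println(err)
	}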
	I0722 11:51:56.085007   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetConfigRaw
	I0722 11:51:56.085616   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:56.088120   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.088546   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.088576   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.088842   58921 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/config.json ...
	I0722 11:51:56.089066   58921 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:56.089088   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:56.089276   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.091216   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.091486   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.091508   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.091653   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.091823   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.091982   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.092132   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.092262   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.092434   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.092444   58921 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:56.192862   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:56.192891   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.193179   58921 buildroot.go:166] provisioning hostname "no-preload-339929"
	I0722 11:51:56.193207   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.193465   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.196195   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.196607   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.196637   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.196843   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.197048   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.197213   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.197358   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.197509   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.197707   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.197722   58921 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-339929 && echo "no-preload-339929" | sudo tee /etc/hostname
	I0722 11:51:56.309997   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-339929
	
	I0722 11:51:56.310019   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.312923   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.313263   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.313290   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.313481   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.313682   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.313882   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.314043   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.314223   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.314413   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.314435   58921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-339929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-339929/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-339929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:56.430088   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
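For reference, the hostname provisioning above reduces to the following commands run over SSH on the guest; this is a condensed recap of the log (the sed branch for an existing 127.0.1.1 entry is omitted), not an additional step:

    sudo hostname no-preload-339929 && echo "no-preload-339929" | sudo tee /etc/hostname
    # make the node name resolvable locally
    grep -q 'no-preload-339929' /etc/hosts || echo '127.0.1.1 no-preload-339929' | sudo tee -a /etc/hosts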
	I0722 11:51:56.430113   58921 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:56.430136   58921 buildroot.go:174] setting up certificates
	I0722 11:51:56.430147   58921 provision.go:84] configureAuth start
	I0722 11:51:56.430158   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.430428   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:56.433041   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.433421   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.433449   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.433619   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.436002   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.436300   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.436333   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.436508   58921 provision.go:143] copyHostCerts
	I0722 11:51:56.436579   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:56.436595   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:56.436665   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:56.436828   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:56.436843   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:56.436876   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:56.436950   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:56.436961   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:56.436987   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:56.437053   58921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.no-preload-339929 san=[127.0.0.1 192.168.61.112 localhost minikube no-preload-339929]
	I0722 11:51:56.792128   58921 provision.go:177] copyRemoteCerts
	I0722 11:51:56.792205   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:56.792238   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.794952   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.795254   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.795283   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.795439   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.795636   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.795772   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.795944   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:56.874574   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:56.898653   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 11:51:56.923200   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:51:56.946393   58921 provision.go:87] duration metric: took 516.233368ms to configureAuth
	I0722 11:51:56.946416   58921 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:56.946612   58921 config.go:182] Loaded profile config "no-preload-339929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 11:51:56.946702   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.949412   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.949923   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.949955   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.949982   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.950195   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.950330   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.950479   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.950591   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.950844   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.950865   58921 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:57.225885   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:57.225909   58921 machine.go:97] duration metric: took 1.136828183s to provisionDockerMachine
	I0722 11:51:57.225924   58921 start.go:293] postStartSetup for "no-preload-339929" (driver="kvm2")
	I0722 11:51:57.225941   58921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:57.225967   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.226315   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:57.226346   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.229404   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.229787   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.229816   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.230008   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.230210   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.230382   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.230518   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.317585   58921 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:57.323102   58921 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:57.323133   58921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:57.323218   58921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:57.323319   58921 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:57.323446   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:57.336656   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:57.365241   58921 start.go:296] duration metric: took 139.301981ms for postStartSetup
	I0722 11:51:57.365299   58921 fix.go:56] duration metric: took 20.888227284s for fixHost
	I0722 11:51:57.365322   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.368451   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.368792   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.368825   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.368964   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.369191   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.369362   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.369532   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.369698   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:57.369918   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:57.369929   58921 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:51:57.478389   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649117.454433204
	
	I0722 11:51:57.478414   58921 fix.go:216] guest clock: 1721649117.454433204
	I0722 11:51:57.478425   58921 fix.go:229] Guest: 2024-07-22 11:51:57.454433204 +0000 UTC Remote: 2024-07-22 11:51:57.365303623 +0000 UTC m=+356.953957779 (delta=89.129581ms)
	I0722 11:51:57.478469   58921 fix.go:200] guest clock delta is within tolerance: 89.129581ms
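The %!s(MISSING) fragments in the command above are almost certainly an artifact of the literal percent signs in the command string being run through a printf-style log formatter without arguments; the probe actually sent to the guest is most likely the usual clock check:

    date +%s.%N    # guest epoch time with nanoseconds, compared against the host timestamp to compute the delta

Here the measured skew (89.129581ms) is well inside tolerance, so no clock adjustment is made.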
	I0722 11:51:57.478488   58921 start.go:83] releasing machines lock for "no-preload-339929", held for 21.001447333s
	I0722 11:51:57.478515   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.478798   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:57.481848   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.482283   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.482313   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.482464   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483024   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483211   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483286   58921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:57.483339   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.483594   58921 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:57.483620   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.486149   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486402   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486561   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.486591   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486746   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.486791   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.486808   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486969   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.487059   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.487141   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.487289   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.487306   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.487460   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.487645   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.591994   58921 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:57.598617   58921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:57.754364   58921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:57.761045   58921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:57.761104   58921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:57.778215   58921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:57.778244   58921 start.go:495] detecting cgroup driver to use...
	I0722 11:51:57.778315   58921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:57.794964   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:57.811232   58921 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:57.811292   58921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:57.826950   58921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:57.842302   58921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:57.971792   58921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:58.129047   58921 docker.go:233] disabling docker service ...
	I0722 11:51:58.129104   58921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:58.146348   58921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:58.160958   58921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:58.294011   58921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:58.414996   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
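Switching the node to CRI-O first requires cri-dockerd and Docker to be out of the way; the sequence above is equivalent to running the following on the guest (commands as they appear in the log):

    sudo systemctl stop -f cri-docker.socket
    sudo systemctl stop -f cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket
    sudo systemctl stop -f docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service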
	I0722 11:51:58.430045   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:58.456092   58921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0722 11:51:58.456186   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.471939   58921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:58.472003   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.485092   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.497749   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.510721   58921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:58.522286   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.535122   58921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.555717   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.567386   58921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:58.577638   58921 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:58.577717   58921 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:58.592354   58921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:51:58.602448   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:58.729652   58921 ssh_runner.go:195] Run: sudo systemctl restart crio
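A condensed sketch of the CRI-O configuration pass above, assuming the stock /etc/crio/crio.conf.d/02-crio.conf drop-in shipped in the Buildroot image (every command is taken from the log; the default_sysctls handling is omitted for brevity):

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter                           # bridge-nf-call-iptables was missing, so load the module
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio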
	I0722 11:51:58.881699   58921 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:58.881761   58921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:58.887049   58921 start.go:563] Will wait 60s for crictl version
	I0722 11:51:58.887099   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:58.890867   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:58.933081   58921 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:58.933171   58921 ssh_runner.go:195] Run: crio --version
	I0722 11:51:58.960418   58921 ssh_runner.go:195] Run: crio --version
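The runtime check above can be reproduced directly on the node with the same commands the log shows:

    which crictl
    sudo /usr/bin/crictl version    # expects RuntimeName: cri-o, RuntimeVersion: 1.29.1
    crio --version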
	I0722 11:51:58.992787   58921 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0722 11:51:54.847605   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:57.346927   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:55.716788   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:56.216920   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:56.716328   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:57.217140   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:57.717149   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.217011   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.716511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:59.216969   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:59.717145   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:00.216454   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.994009   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:58.996823   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:58.997258   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:58.997279   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:58.997465   58921 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:59.001724   58921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:59.014700   58921 kubeadm.go:883] updating cluster {Name:no-preload-339929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:59.014819   58921 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 11:51:59.014847   58921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:59.049135   58921 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0722 11:51:59.049167   58921 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 11:51:59.049252   58921 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.049268   58921 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.049310   58921 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.049314   58921 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.049335   58921 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.049249   58921 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.049445   58921 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.049480   58921 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0722 11:51:59.050964   58921 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.050974   58921 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.050994   58921 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.051032   58921 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0722 11:51:59.051056   58921 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.051075   58921 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.051098   58921 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.051039   58921 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.220737   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.233831   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.239620   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.240125   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.240548   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.269898   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0722 11:51:59.293368   58921 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0722 11:51:59.293420   58921 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.293468   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.309956   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.336323   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.359236   58921 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0722 11:51:59.359284   58921 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.359336   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.359236   58921 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0722 11:51:59.359371   58921 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.359400   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.371412   58921 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0722 11:51:59.371449   58921 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.371485   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.404322   58921 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0722 11:51:59.404364   58921 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.404427   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.542134   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.542279   58921 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0722 11:51:59.542331   58921 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.542347   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.542360   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.542383   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.542439   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.542444   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.542691   58921 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0722 11:51:59.542725   58921 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.542757   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.653771   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.653819   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.653859   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0722 11:51:59.653877   58921 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.653935   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0722 11:51:59.653945   58921 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:51:59.653994   58921 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:51:59.654000   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0722 11:51:59.654034   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0722 11:51:59.654078   58921 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:51:59.654091   58921 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:51:59.654101   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.706185   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706207   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.706218   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 11:51:59.706250   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.706256   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706292   58921 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:51:59.706298   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0722 11:51:59.706369   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706464   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0722 11:51:59.706509   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0722 11:51:59.706554   58921 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:51:57.342604   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:59.839045   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:59.846551   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:02.346391   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.347558   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:00.717154   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:01.216534   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:01.716349   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.217140   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.716458   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:03.216539   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:03.717179   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:04.216994   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:04.716264   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:05.216962   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.170882   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.464606279s)
	I0722 11:52:02.170914   58921 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.464582845s)
	I0722 11:52:02.170942   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0722 11:52:02.170923   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0722 11:52:02.170949   58921 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.464369058s)
	I0722 11:52:02.170970   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:52:02.170972   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0722 11:52:02.171024   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:52:04.139100   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.9680515s)
	I0722 11:52:04.139132   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0722 11:52:04.139166   58921 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:52:04.139250   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:52:01.840270   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.339017   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.840071   60225 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.840097   60225 pod_ready.go:81] duration metric: took 12.007790604s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.840110   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ssttk" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.845312   60225 pod_ready.go:92] pod "kube-proxy-ssttk" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.845336   60225 pod_ready.go:81] duration metric: took 5.218113ms for pod "kube-proxy-ssttk" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.845348   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.850239   60225 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.850264   60225 pod_ready.go:81] duration metric: took 4.905551ms for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.850273   60225 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:06.849408   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:09.347362   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:05.716753   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:06.216886   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:06.717064   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.217069   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.716953   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:08.216521   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:08.716334   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:09.216504   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:09.716904   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:10.216483   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.435274   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.29599961s)
	I0722 11:52:07.435305   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0722 11:52:07.435331   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:52:07.435368   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:52:08.882569   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.447179999s)
	I0722 11:52:08.882593   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0722 11:52:08.882621   58921 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:52:08.882670   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:52:06.857393   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:09.357742   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:11.845980   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:13.846559   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:10.717066   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:11.216328   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:11.717249   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:12.216579   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:12.716697   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:13.217042   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:13.717186   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:14.216301   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:14.716510   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:15.216925   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:10.861616   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.978918937s)
	I0722 11:52:10.861646   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0722 11:52:10.861670   58921 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:52:10.861717   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:52:11.517096   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 11:52:11.517126   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:52:11.517179   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:52:13.588498   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.071290819s)
	I0722 11:52:13.588531   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0722 11:52:13.588567   58921 cache_images.go:123] Successfully loaded all cached images
	I0722 11:52:13.588580   58921 cache_images.go:92] duration metric: took 14.539397599s to LoadCachedImages
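Because there is no preload tarball for v1.31.0-beta.0 ("assuming images are not preloaded" above), each image is transferred from the host cache and loaded into CRI-O's storage via podman; a minimal recap using the image paths from the log:

    sudo crictl images --output json                                   # inventory of what the runtime already has
    sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
    sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
    sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
    # ... repeated for the controller-manager, scheduler, kube-proxy and storage-provisioner images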
	I0722 11:52:13.588591   58921 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.31.0-beta.0 crio true true} ...
	I0722 11:52:13.588728   58921 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-339929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:52:13.588806   58921 ssh_runner.go:195] Run: crio config
	I0722 11:52:13.641949   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:52:13.641969   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:52:13.641978   58921 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:52:13.641997   58921 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-339929 NodeName:no-preload-339929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:52:13.642187   58921 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-339929"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:52:13.642258   58921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 11:52:13.653174   58921 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:52:13.653244   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:52:13.662655   58921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0722 11:52:13.678906   58921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 11:52:13.699269   58921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0722 11:52:13.718873   58921 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I0722 11:52:13.722962   58921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:52:13.736241   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:52:13.858093   58921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:52:13.875377   58921 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929 for IP: 192.168.61.112
	I0722 11:52:13.875402   58921 certs.go:194] generating shared ca certs ...
	I0722 11:52:13.875421   58921 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:52:13.875588   58921 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:52:13.875664   58921 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:52:13.875677   58921 certs.go:256] generating profile certs ...
	I0722 11:52:13.875785   58921 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.key
	I0722 11:52:13.875857   58921 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.key.26403d20
	I0722 11:52:13.875895   58921 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.key
	I0722 11:52:13.875998   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:52:13.876025   58921 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:52:13.876036   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:52:13.876057   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:52:13.876079   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:52:13.876100   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:52:13.876139   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:52:13.876804   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:52:13.923607   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:52:13.952785   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:52:13.983113   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:52:14.012712   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 11:52:14.047958   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:52:14.077411   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:52:14.100978   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:52:14.123416   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:52:14.145662   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:52:14.169188   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:52:14.194650   58921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:52:14.212538   58921 ssh_runner.go:195] Run: openssl version
	I0722 11:52:14.218725   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:52:14.231079   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.235652   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.235695   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.241643   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:52:14.252681   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:52:14.263166   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.267588   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.267629   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.273182   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:52:14.284087   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:52:14.294571   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.298824   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.298870   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.304464   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:52:14.315110   58921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:52:14.319444   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:52:14.325221   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:52:14.330923   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:52:14.336509   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:52:14.342749   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:52:14.348854   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 11:52:14.355682   58921 kubeadm.go:392] StartCluster: {Name:no-preload-339929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:52:14.355818   58921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:52:14.355867   58921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:52:14.395279   58921 cri.go:89] found id: ""
	I0722 11:52:14.395351   58921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:52:14.406738   58921 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:52:14.406755   58921 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:52:14.406793   58921 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:52:14.417161   58921 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:52:14.418468   58921 kubeconfig.go:125] found "no-preload-339929" server: "https://192.168.61.112:8443"
	I0722 11:52:14.420764   58921 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:52:14.430722   58921 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.112
	I0722 11:52:14.430749   58921 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:52:14.430760   58921 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:52:14.430809   58921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:52:14.472164   58921 cri.go:89] found id: ""
	I0722 11:52:14.472228   58921 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:52:14.489758   58921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:52:14.499830   58921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:52:14.499878   58921 kubeadm.go:157] found existing configuration files:
	
	I0722 11:52:14.499932   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:52:14.508977   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:52:14.509024   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:52:14.518199   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:52:14.527136   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:52:14.527182   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:52:14.536182   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:52:14.545425   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:52:14.545482   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:52:14.554843   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:52:14.563681   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:52:14.563722   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:52:14.572855   58921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:52:14.582257   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:14.691452   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.383530   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:11.857298   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:14.357114   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:16.347252   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:18.846603   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:15.716962   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.216373   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.716871   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:17.217108   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:17.716670   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:18.216503   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:18.717214   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:19.216481   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:19.716922   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:20.216618   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:15.600861   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.661719   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.756150   58921 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:52:15.756243   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.256571   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.756636   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.788487   58921 api_server.go:72] duration metric: took 1.032338614s to wait for apiserver process to appear ...
	I0722 11:52:16.788511   58921 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:52:16.788538   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:16.789057   58921 api_server.go:269] stopped: https://192.168.61.112:8443/healthz: Get "https://192.168.61.112:8443/healthz": dial tcp 192.168.61.112:8443: connect: connection refused
	I0722 11:52:17.289531   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.643492   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:52:19.643522   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:52:19.643538   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.712047   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:52:19.712087   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:52:19.789319   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.903924   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:19.903964   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:20.289484   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:20.294499   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:20.294532   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:16.357488   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:18.857066   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:20.789245   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:20.795813   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:20.795846   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:21.289564   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:21.294121   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 200:
	ok
	I0722 11:52:21.300616   58921 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 11:52:21.300644   58921 api_server.go:131] duration metric: took 4.512126962s to wait for apiserver health ...
	I0722 11:52:21.300652   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:52:21.300661   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:52:21.302460   58921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:52:21.347296   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:23.848716   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:20.717047   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.216924   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.716824   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:22.216907   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:22.716538   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:23.216351   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:23.716755   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:24.216816   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:24.717065   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:25.216949   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.303690   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:52:21.315042   58921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:52:21.336417   58921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:52:21.347183   58921 system_pods.go:59] 8 kube-system pods found
	I0722 11:52:21.347225   58921 system_pods.go:61] "coredns-5cfdc65f69-v5qdv" [2321209d-652c-45c1-8d0a-b4ad58f60a25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:52:21.347238   58921 system_pods.go:61] "etcd-no-preload-339929" [9dbeed49-0d34-4643-8a7c-28b9b8b60b00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:52:21.347248   58921 system_pods.go:61] "kube-apiserver-no-preload-339929" [f9675e86-589e-4c6c-b4b5-627e2192b028] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:52:21.347259   58921 system_pods.go:61] "kube-controller-manager-no-preload-339929" [5033e74b-5a1c-4044-aadf-67d5e44b17c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:52:21.347265   58921 system_pods.go:61] "kube-proxy-78tx8" [13f226f0-8837-44d2-aa74-a7db43c73651] Running
	I0722 11:52:21.347276   58921 system_pods.go:61] "kube-scheduler-no-preload-339929" [bf82937c-c95c-4961-afca-60dfe128b6bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:52:21.347288   58921 system_pods.go:61] "metrics-server-78fcd8795b-2lbrr" [1eab4084-3ddf-44f3-9761-130a6f137ea6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:52:21.347294   58921 system_pods.go:61] "storage-provisioner" [66323714-b119-4680-91a3-2e2142e523b4] Running
	I0722 11:52:21.347308   58921 system_pods.go:74] duration metric: took 10.869226ms to wait for pod list to return data ...
	I0722 11:52:21.347316   58921 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:52:21.351215   58921 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:52:21.351242   58921 node_conditions.go:123] node cpu capacity is 2
	I0722 11:52:21.351254   58921 node_conditions.go:105] duration metric: took 3.932625ms to run NodePressure ...
	I0722 11:52:21.351273   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:21.620524   58921 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:52:21.625517   58921 kubeadm.go:739] kubelet initialised
	I0722 11:52:21.625540   58921 kubeadm.go:740] duration metric: took 4.987123ms waiting for restarted kubelet to initialise ...
	I0722 11:52:21.625550   58921 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:52:21.630823   58921 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:23.639602   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.140079   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:25.140103   58921 pod_ready.go:81] duration metric: took 3.509258556s for pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:25.140112   58921 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:20.860912   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:23.356763   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.357406   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:26.345970   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:28.347288   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.716863   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:26.217017   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:26.217108   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:26.259154   59674 cri.go:89] found id: ""
	I0722 11:52:26.259183   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.259193   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:26.259201   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:26.259260   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:26.292777   59674 cri.go:89] found id: ""
	I0722 11:52:26.292801   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.292807   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:26.292813   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:26.292858   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:26.327874   59674 cri.go:89] found id: ""
	I0722 11:52:26.327899   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.327907   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:26.327913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:26.327960   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:26.372370   59674 cri.go:89] found id: ""
	I0722 11:52:26.372405   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.372415   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:26.372421   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:26.372468   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:26.406270   59674 cri.go:89] found id: ""
	I0722 11:52:26.406294   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.406301   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:26.406306   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:26.406355   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:26.441204   59674 cri.go:89] found id: ""
	I0722 11:52:26.441230   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.441237   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:26.441242   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:26.441302   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:26.476132   59674 cri.go:89] found id: ""
	I0722 11:52:26.476162   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.476174   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:26.476180   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:26.476236   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:26.509534   59674 cri.go:89] found id: ""
	I0722 11:52:26.509565   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.509576   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:26.509588   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:26.509601   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:26.564002   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:26.564030   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:26.578619   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:26.578650   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:26.706713   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:26.706738   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:26.706752   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:26.772168   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:26.772201   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:29.313944   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:29.328002   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:29.328076   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:29.367128   59674 cri.go:89] found id: ""
	I0722 11:52:29.367157   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.367166   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:29.367173   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:29.367244   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:29.401552   59674 cri.go:89] found id: ""
	I0722 11:52:29.401581   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.401592   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:29.401599   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:29.401677   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:29.433892   59674 cri.go:89] found id: ""
	I0722 11:52:29.433919   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.433931   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:29.433943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:29.433993   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:29.469619   59674 cri.go:89] found id: ""
	I0722 11:52:29.469649   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.469660   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:29.469667   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:29.469726   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:29.504771   59674 cri.go:89] found id: ""
	I0722 11:52:29.504795   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.504805   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:29.504811   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:29.504871   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:29.538861   59674 cri.go:89] found id: ""
	I0722 11:52:29.538890   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.538900   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:29.538912   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:29.538975   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:29.593633   59674 cri.go:89] found id: ""
	I0722 11:52:29.593669   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.593680   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:29.593688   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:29.593747   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:29.638605   59674 cri.go:89] found id: ""
	I0722 11:52:29.638636   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.638645   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:29.638653   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:29.638664   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:29.691633   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:29.691662   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:29.707277   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:29.707305   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:29.785616   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:29.785638   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:29.785669   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:29.857487   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:29.857517   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:27.146649   58921 pod_ready.go:102] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:28.646058   58921 pod_ready.go:92] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:28.646083   58921 pod_ready.go:81] duration metric: took 3.505964852s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:28.646092   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:27.855581   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:29.856605   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:30.847291   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.847946   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.398141   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:32.411380   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:32.411453   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:32.445857   59674 cri.go:89] found id: ""
	I0722 11:52:32.445882   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.445889   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:32.445895   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:32.445946   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:32.478146   59674 cri.go:89] found id: ""
	I0722 11:52:32.478180   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.478190   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:32.478197   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:32.478268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:32.511110   59674 cri.go:89] found id: ""
	I0722 11:52:32.511138   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.511147   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:32.511161   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:32.511216   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:32.545388   59674 cri.go:89] found id: ""
	I0722 11:52:32.545415   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.545425   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:32.545432   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:32.545489   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:32.579097   59674 cri.go:89] found id: ""
	I0722 11:52:32.579125   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.579135   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:32.579141   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:32.579205   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:32.615302   59674 cri.go:89] found id: ""
	I0722 11:52:32.615333   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.615343   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:32.615350   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:32.615407   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:32.654527   59674 cri.go:89] found id: ""
	I0722 11:52:32.654552   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.654562   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:32.654568   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:32.654625   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:32.689409   59674 cri.go:89] found id: ""
	I0722 11:52:32.689437   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.689445   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:32.689454   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:32.689470   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:32.740478   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:32.740511   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:32.754266   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:32.754299   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:32.824441   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:32.824461   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:32.824475   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:32.896752   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:32.896781   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
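	The block above (PID 59674) is one iteration of minikube's control-plane health probe while it waits for the API server to come back: it looks for a kube-apiserver process, lists CRI-O containers for every control-plane component (all of them return no IDs), then gathers kubelet, dmesg, "describe nodes", CRI-O, and container-status logs; "describe nodes" fails because nothing is listening on localhost:8443. The same cycle repeats roughly every three seconds for the rest of this excerpt. The individual probe commands are plain shell invocations and can be rerun by hand from inside the node; a minimal sketch, assuming SSH access to the node (for example via minikube ssh with the affected profile), with the commands copied from the log:

		# hypothetical manual repro of the probe commands logged above
		sudo pgrep -xnf 'kube-apiserver.*minikube.*'              # is an apiserver process running?
		sudo crictl ps -a --quiet --name=kube-apiserver           # any apiserver container in CRI-O?
		sudo journalctl -u kubelet -n 400                         # recent kubelet logs
		sudo journalctl -u crio -n 400                            # recent CRI-O logs
		sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
		  --kubeconfig=/var/lib/minikube/kubeconfig               # fails: connection refused on localhost:8443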
	I0722 11:52:30.652706   58921 pod_ready.go:102] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.653310   58921 pod_ready.go:102] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.154169   58921 pod_ready.go:92] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.154195   58921 pod_ready.go:81] duration metric: took 6.508095973s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.154207   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.160406   58921 pod_ready.go:92] pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.160429   58921 pod_ready.go:81] duration metric: took 6.213375ms for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.160440   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-78tx8" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.166358   58921 pod_ready.go:92] pod "kube-proxy-78tx8" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.166377   58921 pod_ready.go:81] duration metric: took 5.930051ms for pod "kube-proxy-78tx8" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.166387   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.170508   58921 pod_ready.go:92] pod "kube-scheduler-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.170528   58921 pod_ready.go:81] duration metric: took 4.133521ms for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.170538   58921 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:32.355967   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:34.358106   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.346579   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:37.346671   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:39.346974   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.438478   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:35.454105   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:35.454175   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:35.493287   59674 cri.go:89] found id: ""
	I0722 11:52:35.493319   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.493330   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:35.493337   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:35.493396   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:35.528035   59674 cri.go:89] found id: ""
	I0722 11:52:35.528060   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.528066   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:35.528072   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:35.528126   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:35.586153   59674 cri.go:89] found id: ""
	I0722 11:52:35.586199   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.586213   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:35.586220   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:35.586283   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:35.630371   59674 cri.go:89] found id: ""
	I0722 11:52:35.630405   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.630416   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:35.630425   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:35.630499   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:35.667593   59674 cri.go:89] found id: ""
	I0722 11:52:35.667621   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.667629   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:35.667635   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:35.667682   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:35.706933   59674 cri.go:89] found id: ""
	I0722 11:52:35.706964   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.706973   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:35.706981   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:35.707040   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:35.743174   59674 cri.go:89] found id: ""
	I0722 11:52:35.743205   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.743215   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:35.743223   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:35.743289   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:35.784450   59674 cri.go:89] found id: ""
	I0722 11:52:35.784478   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.784487   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:35.784497   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:35.784508   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:35.840326   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:35.840357   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:35.856432   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:35.856471   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:35.932273   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:35.932298   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:35.932313   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:36.010376   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:36.010420   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:38.552982   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:38.566817   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:38.566895   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:38.601313   59674 cri.go:89] found id: ""
	I0722 11:52:38.601356   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.601371   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:38.601381   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:38.601459   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:38.637303   59674 cri.go:89] found id: ""
	I0722 11:52:38.637331   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.637341   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:38.637352   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:38.637413   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:38.672840   59674 cri.go:89] found id: ""
	I0722 11:52:38.672871   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.672883   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:38.672894   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:38.672986   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:38.709375   59674 cri.go:89] found id: ""
	I0722 11:52:38.709402   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.709413   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:38.709420   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:38.709473   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:38.744060   59674 cri.go:89] found id: ""
	I0722 11:52:38.744084   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.744094   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:38.744100   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:38.744161   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:38.778322   59674 cri.go:89] found id: ""
	I0722 11:52:38.778350   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.778361   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:38.778368   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:38.778427   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:38.811803   59674 cri.go:89] found id: ""
	I0722 11:52:38.811830   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.811840   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:38.811847   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:38.811902   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:38.843935   59674 cri.go:89] found id: ""
	I0722 11:52:38.843959   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.843975   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:38.843985   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:38.843999   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:38.912613   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:38.912639   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:38.912654   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:39.001924   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:39.001964   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:39.041645   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:39.041684   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:39.093322   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:39.093354   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:37.177516   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:39.675985   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:36.856164   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:38.858983   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.847112   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:44.346271   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.606698   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:41.619758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:41.619815   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:41.657432   59674 cri.go:89] found id: ""
	I0722 11:52:41.657458   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.657469   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:41.657476   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:41.657536   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:41.695136   59674 cri.go:89] found id: ""
	I0722 11:52:41.695169   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.695177   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:41.695183   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:41.695243   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:41.735595   59674 cri.go:89] found id: ""
	I0722 11:52:41.735621   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.735641   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:41.735648   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:41.735710   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:41.770398   59674 cri.go:89] found id: ""
	I0722 11:52:41.770428   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.770438   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:41.770445   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:41.770554   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:41.808250   59674 cri.go:89] found id: ""
	I0722 11:52:41.808277   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.808285   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:41.808290   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:41.808349   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:41.843494   59674 cri.go:89] found id: ""
	I0722 11:52:41.843524   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.843536   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:41.843543   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:41.843611   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:41.882916   59674 cri.go:89] found id: ""
	I0722 11:52:41.882941   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.882949   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:41.882954   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:41.883011   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:41.916503   59674 cri.go:89] found id: ""
	I0722 11:52:41.916527   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.916538   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:41.916549   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:41.916564   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:41.966989   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:41.967023   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:42.021676   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:42.021716   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:42.054625   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:42.054655   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:42.122425   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:42.122449   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:42.122463   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:44.699097   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:44.713759   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:44.713815   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:44.752668   59674 cri.go:89] found id: ""
	I0722 11:52:44.752698   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.752709   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:44.752716   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:44.752778   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:44.793550   59674 cri.go:89] found id: ""
	I0722 11:52:44.793575   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.793587   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:44.793594   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:44.793665   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:44.833860   59674 cri.go:89] found id: ""
	I0722 11:52:44.833882   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.833890   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:44.833903   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:44.833952   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:44.873847   59674 cri.go:89] found id: ""
	I0722 11:52:44.873880   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.873898   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:44.873910   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:44.873957   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:44.907843   59674 cri.go:89] found id: ""
	I0722 11:52:44.907867   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.907877   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:44.907884   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:44.907937   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:44.942998   59674 cri.go:89] found id: ""
	I0722 11:52:44.943026   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.943034   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:44.943040   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:44.943093   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:44.981145   59674 cri.go:89] found id: ""
	I0722 11:52:44.981173   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.981183   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:44.981190   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:44.981252   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:45.018542   59674 cri.go:89] found id: ""
	I0722 11:52:45.018568   59674 logs.go:276] 0 containers: []
	W0722 11:52:45.018576   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:45.018585   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:45.018599   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:45.069480   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:45.069510   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:45.083323   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:45.083347   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:45.149976   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:45.149996   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:45.150008   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:45.230617   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:45.230649   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:41.677474   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:43.678565   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.357194   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:43.856753   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:46.346339   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.846643   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:47.770384   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:47.793582   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:47.793654   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:47.837187   59674 cri.go:89] found id: ""
	I0722 11:52:47.837215   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.837224   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:47.837232   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:47.837290   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:47.874295   59674 cri.go:89] found id: ""
	I0722 11:52:47.874325   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.874336   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:47.874345   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:47.874414   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:47.915782   59674 cri.go:89] found id: ""
	I0722 11:52:47.915812   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.915823   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:47.915830   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:47.915886   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:47.956624   59674 cri.go:89] found id: ""
	I0722 11:52:47.956653   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.956663   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:47.956670   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:47.956731   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:47.996237   59674 cri.go:89] found id: ""
	I0722 11:52:47.996264   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.996272   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:47.996277   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:47.996335   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:48.032022   59674 cri.go:89] found id: ""
	I0722 11:52:48.032046   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.032058   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:48.032066   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:48.032117   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:48.066218   59674 cri.go:89] found id: ""
	I0722 11:52:48.066248   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.066259   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:48.066265   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:48.066316   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:48.099781   59674 cri.go:89] found id: ""
	I0722 11:52:48.099803   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.099810   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:48.099818   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:48.099827   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:48.174488   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:48.174528   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:48.215029   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:48.215068   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:48.268819   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:48.268850   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:48.283307   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:48.283335   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:48.356491   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:45.678697   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.179684   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:45.857970   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.357330   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.357469   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.846976   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.847954   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.857172   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:50.871178   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:50.871244   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:50.907166   59674 cri.go:89] found id: ""
	I0722 11:52:50.907190   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.907197   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:50.907203   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:50.907256   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:50.942929   59674 cri.go:89] found id: ""
	I0722 11:52:50.942958   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.942969   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:50.942976   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:50.943041   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:50.982323   59674 cri.go:89] found id: ""
	I0722 11:52:50.982355   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.982367   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:50.982373   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:50.982436   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:51.016557   59674 cri.go:89] found id: ""
	I0722 11:52:51.016586   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.016597   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:51.016604   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:51.016662   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:51.051811   59674 cri.go:89] found id: ""
	I0722 11:52:51.051844   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.051855   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:51.051863   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:51.051923   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:51.088147   59674 cri.go:89] found id: ""
	I0722 11:52:51.088177   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.088189   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:51.088197   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:51.088257   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:51.126795   59674 cri.go:89] found id: ""
	I0722 11:52:51.126827   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.126838   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:51.126845   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:51.126909   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:51.165508   59674 cri.go:89] found id: ""
	I0722 11:52:51.165539   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.165550   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:51.165562   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:51.165575   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:51.245014   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:51.245040   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:51.245055   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:51.335845   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:51.335893   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:51.375806   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:51.375837   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:51.430241   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:51.430270   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:53.944572   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:53.957805   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:53.957899   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:53.997116   59674 cri.go:89] found id: ""
	I0722 11:52:53.997144   59674 logs.go:276] 0 containers: []
	W0722 11:52:53.997154   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:53.997161   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:53.997222   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:54.033518   59674 cri.go:89] found id: ""
	I0722 11:52:54.033544   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.033553   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:54.033560   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:54.033626   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:54.071083   59674 cri.go:89] found id: ""
	I0722 11:52:54.071108   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.071119   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:54.071127   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:54.071194   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:54.107834   59674 cri.go:89] found id: ""
	I0722 11:52:54.107860   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.107868   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:54.107873   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:54.107929   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:54.141825   59674 cri.go:89] found id: ""
	I0722 11:52:54.141850   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.141858   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:54.141865   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:54.141925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:54.174297   59674 cri.go:89] found id: ""
	I0722 11:52:54.174323   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.174333   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:54.174341   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:54.174403   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:54.206781   59674 cri.go:89] found id: ""
	I0722 11:52:54.206803   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.206811   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:54.206816   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:54.206861   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:54.239180   59674 cri.go:89] found id: ""
	I0722 11:52:54.239204   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.239212   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:54.239223   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:54.239237   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:54.307317   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:54.307345   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:54.307360   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:54.392334   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:54.392368   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:54.435129   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:54.435168   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:54.495428   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:54.495456   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:50.676790   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.678046   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:55.177430   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.357839   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:54.856859   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:55.346866   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.845527   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.009559   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:57.024145   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:57.024215   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:57.063027   59674 cri.go:89] found id: ""
	I0722 11:52:57.063053   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.063060   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:57.063066   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:57.063133   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:57.095940   59674 cri.go:89] found id: ""
	I0722 11:52:57.095961   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.095968   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:57.095973   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:57.096018   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:57.129931   59674 cri.go:89] found id: ""
	I0722 11:52:57.129952   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.129960   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:57.129965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:57.130009   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:57.164643   59674 cri.go:89] found id: ""
	I0722 11:52:57.164672   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.164683   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:57.164691   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:57.164744   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:57.201411   59674 cri.go:89] found id: ""
	I0722 11:52:57.201440   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.201451   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:57.201458   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:57.201523   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:57.235816   59674 cri.go:89] found id: ""
	I0722 11:52:57.235838   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.235848   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:57.235854   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:57.235913   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:57.273896   59674 cri.go:89] found id: ""
	I0722 11:52:57.273925   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.273936   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:57.273943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:57.273997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:57.312577   59674 cri.go:89] found id: ""
	I0722 11:52:57.312602   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.312610   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:57.312618   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:57.312636   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:57.366529   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:57.366558   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:57.380829   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:57.380854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:57.450855   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:57.450875   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:57.450889   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:57.531450   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:57.531480   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:00.071642   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:00.085199   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:00.085264   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:00.123418   59674 cri.go:89] found id: ""
	I0722 11:53:00.123439   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.123446   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:00.123451   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:00.123510   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:00.157005   59674 cri.go:89] found id: ""
	I0722 11:53:00.157032   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.157042   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:00.157049   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:00.157108   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:00.196244   59674 cri.go:89] found id: ""
	I0722 11:53:00.196272   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.196281   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:00.196286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:00.196335   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:00.233010   59674 cri.go:89] found id: ""
	I0722 11:53:00.233039   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.233049   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:00.233056   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:00.233112   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:00.268154   59674 cri.go:89] found id: ""
	I0722 11:53:00.268179   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.268187   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:00.268192   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:00.268250   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:00.304159   59674 cri.go:89] found id: ""
	I0722 11:53:00.304184   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.304194   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:00.304201   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:00.304268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:00.336853   59674 cri.go:89] found id: ""
	I0722 11:53:00.336883   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.336893   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:00.336899   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:00.336960   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:00.370921   59674 cri.go:89] found id: ""
	I0722 11:53:00.370943   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.370953   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:00.370963   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:00.370979   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:57.177913   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:59.677194   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.356163   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:59.357042   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:00.347125   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:02.846531   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:00.422367   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:00.422399   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:00.437915   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:00.437947   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:00.512663   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:00.512689   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:00.512700   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:00.595147   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:00.595189   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:03.135150   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:03.148079   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:03.148151   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:03.182278   59674 cri.go:89] found id: ""
	I0722 11:53:03.182308   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.182318   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:03.182327   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:03.182409   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:03.220570   59674 cri.go:89] found id: ""
	I0722 11:53:03.220599   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.220607   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:03.220613   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:03.220671   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:03.255917   59674 cri.go:89] found id: ""
	I0722 11:53:03.255940   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.255950   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:03.255957   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:03.256020   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:03.290857   59674 cri.go:89] found id: ""
	I0722 11:53:03.290885   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.290895   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:03.290902   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:03.290959   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:03.326917   59674 cri.go:89] found id: ""
	I0722 11:53:03.326940   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.326951   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:03.326958   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:03.327016   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:03.363787   59674 cri.go:89] found id: ""
	I0722 11:53:03.363809   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.363818   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:03.363825   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:03.363881   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:03.397453   59674 cri.go:89] found id: ""
	I0722 11:53:03.397479   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.397489   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:03.397496   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:03.397554   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:03.429984   59674 cri.go:89] found id: ""
	I0722 11:53:03.430012   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.430020   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:03.430037   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:03.430054   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:03.509273   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:03.509305   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:03.555522   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:03.555552   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:03.607361   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:03.607389   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:03.622731   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:03.622752   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:03.699844   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:02.176754   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:04.180602   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:01.856868   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:04.356343   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:05.346023   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:07.846190   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:06.200053   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:06.213571   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:06.213628   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:06.249320   59674 cri.go:89] found id: ""
	I0722 11:53:06.249348   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.249359   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:06.249366   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:06.249426   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:06.283378   59674 cri.go:89] found id: ""
	I0722 11:53:06.283405   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.283415   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:06.283422   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:06.283482   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:06.319519   59674 cri.go:89] found id: ""
	I0722 11:53:06.319540   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.319548   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:06.319553   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:06.319606   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:06.352263   59674 cri.go:89] found id: ""
	I0722 11:53:06.352289   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.352298   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:06.352310   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:06.352370   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:06.388262   59674 cri.go:89] found id: ""
	I0722 11:53:06.388285   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.388292   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:06.388297   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:06.388348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:06.427487   59674 cri.go:89] found id: ""
	I0722 11:53:06.427519   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.427529   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:06.427537   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:06.427592   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:06.462567   59674 cri.go:89] found id: ""
	I0722 11:53:06.462597   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.462610   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:06.462618   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:06.462674   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:06.496880   59674 cri.go:89] found id: ""
	I0722 11:53:06.496904   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.496911   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:06.496920   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:06.496929   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:06.549225   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:06.549262   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:06.564780   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:06.564808   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:06.632152   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:06.632177   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:06.632196   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:06.706909   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:06.706948   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:09.246773   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:09.260605   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:09.260673   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:09.294685   59674 cri.go:89] found id: ""
	I0722 11:53:09.294707   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.294718   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:09.294726   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:09.294787   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:09.331109   59674 cri.go:89] found id: ""
	I0722 11:53:09.331140   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.331148   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:09.331153   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:09.331208   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:09.366873   59674 cri.go:89] found id: ""
	I0722 11:53:09.366901   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.366911   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:09.366928   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:09.366980   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:09.399614   59674 cri.go:89] found id: ""
	I0722 11:53:09.399642   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.399649   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:09.399655   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:09.399708   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:09.434326   59674 cri.go:89] found id: ""
	I0722 11:53:09.434359   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.434369   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:09.434375   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:09.434437   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:09.468911   59674 cri.go:89] found id: ""
	I0722 11:53:09.468942   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.468953   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:09.468961   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:09.469021   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:09.510003   59674 cri.go:89] found id: ""
	I0722 11:53:09.510031   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.510042   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:09.510048   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:09.510101   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:09.545074   59674 cri.go:89] found id: ""
	I0722 11:53:09.545103   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.545113   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:09.545123   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:09.545148   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:09.559370   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:09.559399   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:09.632039   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:09.632064   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:09.632083   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:09.711851   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:09.711881   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:09.751872   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:09.751898   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:06.678310   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:09.176261   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:06.358444   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:08.858131   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:09.846552   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:12.347071   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:12.302294   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:12.315638   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:12.315708   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:12.349556   59674 cri.go:89] found id: ""
	I0722 11:53:12.349579   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.349588   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:12.349595   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:12.349651   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:12.387443   59674 cri.go:89] found id: ""
	I0722 11:53:12.387470   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.387483   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:12.387488   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:12.387541   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:12.422676   59674 cri.go:89] found id: ""
	I0722 11:53:12.422704   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.422714   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:12.422720   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:12.422781   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:12.457069   59674 cri.go:89] found id: ""
	I0722 11:53:12.457099   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.457111   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:12.457117   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:12.457175   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:12.492498   59674 cri.go:89] found id: ""
	I0722 11:53:12.492526   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.492536   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:12.492543   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:12.492603   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:12.529015   59674 cri.go:89] found id: ""
	I0722 11:53:12.529046   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.529056   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:12.529063   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:12.529122   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:12.564325   59674 cri.go:89] found id: ""
	I0722 11:53:12.564353   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.564363   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:12.564371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:12.564441   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:12.603232   59674 cri.go:89] found id: ""
	I0722 11:53:12.603257   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.603269   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:12.603278   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:12.603289   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:12.689901   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:12.689933   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:12.729780   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:12.729808   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:12.778899   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:12.778928   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:12.792619   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:12.792649   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:12.860293   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:15.361321   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:15.375062   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:15.375125   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:15.409072   59674 cri.go:89] found id: ""
	I0722 11:53:15.409096   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.409104   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:15.409109   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:15.409163   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:11.176321   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:13.176728   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.176983   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:11.356441   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:13.356690   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:14.846984   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:17.346182   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:19.346559   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.447004   59674 cri.go:89] found id: ""
	I0722 11:53:15.447026   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.447033   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:15.447039   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:15.447096   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:15.480783   59674 cri.go:89] found id: ""
	I0722 11:53:15.480811   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.480822   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:15.480829   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:15.480906   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:15.520672   59674 cri.go:89] found id: ""
	I0722 11:53:15.520701   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.520713   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:15.520721   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:15.520777   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:15.557886   59674 cri.go:89] found id: ""
	I0722 11:53:15.557916   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.557926   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:15.557933   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:15.557994   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:15.593517   59674 cri.go:89] found id: ""
	I0722 11:53:15.593545   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.593555   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:15.593561   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:15.593619   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:15.628205   59674 cri.go:89] found id: ""
	I0722 11:53:15.628235   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.628246   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:15.628253   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:15.628314   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:15.664239   59674 cri.go:89] found id: ""
	I0722 11:53:15.664265   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.664276   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:15.664287   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:15.664300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:15.714246   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:15.714281   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:15.728467   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:15.728490   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:15.813299   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:15.813323   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:15.813339   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:15.899949   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:15.899984   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:18.443394   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:18.457499   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:18.457555   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:18.489712   59674 cri.go:89] found id: ""
	I0722 11:53:18.489735   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.489745   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:18.489752   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:18.489812   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:18.524947   59674 cri.go:89] found id: ""
	I0722 11:53:18.524973   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.524982   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:18.524989   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:18.525045   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:18.560325   59674 cri.go:89] found id: ""
	I0722 11:53:18.560350   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.560361   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:18.560367   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:18.560439   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:18.594221   59674 cri.go:89] found id: ""
	I0722 11:53:18.594247   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.594255   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:18.594265   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:18.594322   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:18.630809   59674 cri.go:89] found id: ""
	I0722 11:53:18.630839   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.630850   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:18.630857   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:18.630917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:18.666051   59674 cri.go:89] found id: ""
	I0722 11:53:18.666078   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.666089   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:18.666100   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:18.666159   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:18.703337   59674 cri.go:89] found id: ""
	I0722 11:53:18.703362   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.703370   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:18.703375   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:18.703435   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:18.738960   59674 cri.go:89] found id: ""
	I0722 11:53:18.738990   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.738999   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:18.739008   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:18.739022   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:18.788130   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:18.788163   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:18.802219   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:18.802249   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:18.869568   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:18.869586   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:18.869597   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:18.947223   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:18.947256   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:17.177247   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:19.677072   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.857320   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:18.356290   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:20.356364   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:21.346698   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:23.846749   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:21.487936   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:21.501337   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:21.501421   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:21.537649   59674 cri.go:89] found id: ""
	I0722 11:53:21.537674   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.537681   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:21.537686   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:21.537746   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:21.583693   59674 cri.go:89] found id: ""
	I0722 11:53:21.583728   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.583738   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:21.583745   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:21.583803   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:21.621690   59674 cri.go:89] found id: ""
	I0722 11:53:21.621714   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.621722   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:21.621728   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:21.621773   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:21.657855   59674 cri.go:89] found id: ""
	I0722 11:53:21.657878   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.657885   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:21.657891   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:21.657953   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:21.695025   59674 cri.go:89] found id: ""
	I0722 11:53:21.695051   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.695059   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:21.695065   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:21.695113   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:21.730108   59674 cri.go:89] found id: ""
	I0722 11:53:21.730138   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.730146   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:21.730151   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:21.730208   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:21.763943   59674 cri.go:89] found id: ""
	I0722 11:53:21.763972   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.763980   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:21.763985   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:21.764030   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:21.801227   59674 cri.go:89] found id: ""
	I0722 11:53:21.801251   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.801259   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:21.801270   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:21.801283   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:21.851428   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:21.851457   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:21.867798   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:21.867827   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:21.945577   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:21.945599   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:21.945612   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:22.028796   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:22.028839   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:24.577167   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:24.589859   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:24.589917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:24.623952   59674 cri.go:89] found id: ""
	I0722 11:53:24.623985   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.623997   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:24.624003   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:24.624065   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:24.658881   59674 cri.go:89] found id: ""
	I0722 11:53:24.658910   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.658919   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:24.658925   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:24.658973   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:24.694551   59674 cri.go:89] found id: ""
	I0722 11:53:24.694574   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.694584   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:24.694590   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:24.694634   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:24.728952   59674 cri.go:89] found id: ""
	I0722 11:53:24.728980   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.728990   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:24.728999   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:24.729061   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:24.764562   59674 cri.go:89] found id: ""
	I0722 11:53:24.764584   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.764592   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:24.764597   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:24.764643   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:24.804184   59674 cri.go:89] found id: ""
	I0722 11:53:24.804209   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.804219   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:24.804226   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:24.804277   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:24.841870   59674 cri.go:89] found id: ""
	I0722 11:53:24.841896   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.841906   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:24.841913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:24.841967   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:24.876174   59674 cri.go:89] found id: ""
	I0722 11:53:24.876201   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.876210   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:24.876220   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:24.876234   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:24.928405   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:24.928434   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:24.942443   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:24.942472   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:25.010281   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:25.010304   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:25.010318   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:25.091493   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:25.091525   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:22.176013   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:24.177414   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:22.356642   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:24.856817   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:26.346061   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:28.346192   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:27.630939   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:27.644250   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:27.644324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:27.686356   59674 cri.go:89] found id: ""
	I0722 11:53:27.686381   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.686391   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:27.686404   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:27.686483   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:27.719105   59674 cri.go:89] found id: ""
	I0722 11:53:27.719133   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.719143   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:27.719149   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:27.719210   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:27.755476   59674 cri.go:89] found id: ""
	I0722 11:53:27.755505   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.755514   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:27.755520   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:27.755570   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:27.789936   59674 cri.go:89] found id: ""
	I0722 11:53:27.789963   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.789971   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:27.789977   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:27.790023   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:27.824246   59674 cri.go:89] found id: ""
	I0722 11:53:27.824273   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.824280   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:27.824286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:27.824332   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:27.860081   59674 cri.go:89] found id: ""
	I0722 11:53:27.860107   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.860114   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:27.860120   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:27.860172   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:27.895705   59674 cri.go:89] found id: ""
	I0722 11:53:27.895732   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.895741   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:27.895748   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:27.895801   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:27.930750   59674 cri.go:89] found id: ""
	I0722 11:53:27.930774   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.930781   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:27.930790   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:27.930802   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:28.025545   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:28.025567   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:28.025578   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:28.111194   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:28.111227   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:28.154270   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:28.154300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:28.205822   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:28.205854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:26.677054   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:29.178063   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:26.856858   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:29.356840   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:30.346338   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:32.346478   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:30.720468   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:30.733753   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:30.733806   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:30.771774   59674 cri.go:89] found id: ""
	I0722 11:53:30.771803   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.771810   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:30.771816   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:30.771876   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:30.810499   59674 cri.go:89] found id: ""
	I0722 11:53:30.810526   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.810537   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:30.810543   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:30.810608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:30.846824   59674 cri.go:89] found id: ""
	I0722 11:53:30.846854   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.846865   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:30.846872   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:30.846929   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:30.882372   59674 cri.go:89] found id: ""
	I0722 11:53:30.882399   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.882408   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:30.882415   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:30.882462   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:30.916152   59674 cri.go:89] found id: ""
	I0722 11:53:30.916186   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.916201   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:30.916209   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:30.916281   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:30.950442   59674 cri.go:89] found id: ""
	I0722 11:53:30.950466   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.950475   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:30.950482   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:30.950537   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:30.988328   59674 cri.go:89] found id: ""
	I0722 11:53:30.988355   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.988367   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:30.988374   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:30.988452   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:31.024500   59674 cri.go:89] found id: ""
	I0722 11:53:31.024531   59674 logs.go:276] 0 containers: []
	W0722 11:53:31.024542   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:31.024552   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:31.024565   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:31.078276   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:31.078306   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:31.093640   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:31.093665   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:31.161107   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:31.161131   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:31.161145   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:31.248520   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:31.248552   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:33.792694   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:33.806731   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:33.806802   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:33.840813   59674 cri.go:89] found id: ""
	I0722 11:53:33.840842   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.840852   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:33.840859   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:33.840930   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:33.878353   59674 cri.go:89] found id: ""
	I0722 11:53:33.878380   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.878388   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:33.878394   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:33.878453   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:33.913894   59674 cri.go:89] found id: ""
	I0722 11:53:33.913927   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.913937   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:33.913944   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:33.914007   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:33.950659   59674 cri.go:89] found id: ""
	I0722 11:53:33.950689   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.950700   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:33.950706   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:33.950762   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:33.987904   59674 cri.go:89] found id: ""
	I0722 11:53:33.987932   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.987940   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:33.987945   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:33.987995   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:34.022877   59674 cri.go:89] found id: ""
	I0722 11:53:34.022900   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.022910   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:34.022918   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:34.022970   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:34.056678   59674 cri.go:89] found id: ""
	I0722 11:53:34.056707   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.056717   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:34.056722   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:34.056769   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:34.089573   59674 cri.go:89] found id: ""
	I0722 11:53:34.089602   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.089610   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:34.089618   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:34.089630   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:34.161023   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:34.161043   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:34.161058   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:34.243215   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:34.243249   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:34.290788   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:34.290812   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:34.339653   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:34.339692   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:31.677233   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:33.678067   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:31.856615   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:33.857665   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:34.846962   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.847525   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:39.347402   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.857217   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:36.871083   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:36.871150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:36.913807   59674 cri.go:89] found id: ""
	I0722 11:53:36.913833   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.913841   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:36.913847   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:36.913923   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:36.953290   59674 cri.go:89] found id: ""
	I0722 11:53:36.953316   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.953327   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:36.953334   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:36.953395   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:36.990900   59674 cri.go:89] found id: ""
	I0722 11:53:36.990930   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.990938   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:36.990943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:36.990997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:37.034346   59674 cri.go:89] found id: ""
	I0722 11:53:37.034371   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.034381   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:37.034387   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:37.034444   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:37.071413   59674 cri.go:89] found id: ""
	I0722 11:53:37.071440   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.071451   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:37.071458   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:37.071509   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:37.107034   59674 cri.go:89] found id: ""
	I0722 11:53:37.107065   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.107076   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:37.107084   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:37.107143   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:37.145505   59674 cri.go:89] found id: ""
	I0722 11:53:37.145528   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.145536   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:37.145545   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:37.145607   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:37.182287   59674 cri.go:89] found id: ""
	I0722 11:53:37.182313   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.182321   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:37.182332   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:37.182343   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:37.195663   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:37.195688   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:37.267451   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:37.267476   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:37.267492   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:37.348532   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:37.348561   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:37.396108   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:37.396134   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:39.946775   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:39.959980   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:39.960039   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:39.994172   59674 cri.go:89] found id: ""
	I0722 11:53:39.994198   59674 logs.go:276] 0 containers: []
	W0722 11:53:39.994208   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:39.994213   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:39.994269   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:40.032782   59674 cri.go:89] found id: ""
	I0722 11:53:40.032813   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.032823   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:40.032830   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:40.032890   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:40.067503   59674 cri.go:89] found id: ""
	I0722 11:53:40.067525   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.067532   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:40.067537   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:40.067593   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:40.102234   59674 cri.go:89] found id: ""
	I0722 11:53:40.102262   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.102273   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:40.102280   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:40.102342   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:40.135152   59674 cri.go:89] found id: ""
	I0722 11:53:40.135180   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.135190   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:40.135197   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:40.135262   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:40.168930   59674 cri.go:89] found id: ""
	I0722 11:53:40.168958   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.168978   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:40.168993   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:40.169056   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:40.209032   59674 cri.go:89] found id: ""
	I0722 11:53:40.209058   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.209065   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:40.209071   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:40.209131   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:40.243952   59674 cri.go:89] found id: ""
	I0722 11:53:40.243976   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.243984   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:40.243993   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:40.244006   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:40.297909   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:40.297944   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:40.313359   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:40.313385   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:40.391089   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:40.391118   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:40.391136   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:36.178616   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:38.677556   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.356964   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:38.857992   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:41.847033   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:44.346087   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:40.469622   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:40.469652   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:43.010264   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:43.023750   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:43.023823   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:43.058899   59674 cri.go:89] found id: ""
	I0722 11:53:43.058922   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.058930   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:43.058937   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:43.058999   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:43.093308   59674 cri.go:89] found id: ""
	I0722 11:53:43.093328   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.093336   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:43.093341   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:43.093385   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:43.126617   59674 cri.go:89] found id: ""
	I0722 11:53:43.126648   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.126671   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:43.126686   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:43.126737   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:43.159455   59674 cri.go:89] found id: ""
	I0722 11:53:43.159482   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.159492   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:43.159500   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:43.159561   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:43.195726   59674 cri.go:89] found id: ""
	I0722 11:53:43.195749   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.195758   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:43.195766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:43.195830   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:43.231996   59674 cri.go:89] found id: ""
	I0722 11:53:43.232025   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.232038   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:43.232046   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:43.232118   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:43.266911   59674 cri.go:89] found id: ""
	I0722 11:53:43.266936   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.266943   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:43.266948   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:43.267005   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:43.303202   59674 cri.go:89] found id: ""
	I0722 11:53:43.303227   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.303236   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:43.303243   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:43.303255   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:43.377328   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:43.377362   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:43.418732   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:43.418759   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:43.471507   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:43.471536   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:43.485141   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:43.485175   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:43.557071   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:41.178042   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:43.178179   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:41.357090   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:43.856788   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:46.346435   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.347938   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:46.057361   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:46.071701   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:46.071784   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:46.107818   59674 cri.go:89] found id: ""
	I0722 11:53:46.107845   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.107853   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:46.107859   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:46.107952   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:46.141871   59674 cri.go:89] found id: ""
	I0722 11:53:46.141898   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.141906   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:46.141911   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:46.141972   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:46.180980   59674 cri.go:89] found id: ""
	I0722 11:53:46.181004   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.181014   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:46.181021   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:46.181083   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:46.219765   59674 cri.go:89] found id: ""
	I0722 11:53:46.219797   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.219806   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:46.219812   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:46.219866   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:46.259517   59674 cri.go:89] found id: ""
	I0722 11:53:46.259544   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.259554   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:46.259562   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:46.259621   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:46.292190   59674 cri.go:89] found id: ""
	I0722 11:53:46.292220   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.292230   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:46.292239   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:46.292305   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:46.325494   59674 cri.go:89] found id: ""
	I0722 11:53:46.325519   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.325529   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:46.325536   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:46.325608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:46.364367   59674 cri.go:89] found id: ""
	I0722 11:53:46.364403   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.364412   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:46.364422   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:46.364435   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:46.417749   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:46.417792   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:46.433793   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:46.433817   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:46.502075   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:46.502098   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:46.502111   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:46.584038   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:46.584075   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:49.127895   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:49.141601   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:49.141672   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:49.175251   59674 cri.go:89] found id: ""
	I0722 11:53:49.175276   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.175284   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:49.175290   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:49.175346   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:49.214504   59674 cri.go:89] found id: ""
	I0722 11:53:49.214552   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.214563   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:49.214570   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:49.214631   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:49.251844   59674 cri.go:89] found id: ""
	I0722 11:53:49.251872   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.251882   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:49.251889   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:49.251955   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:49.285540   59674 cri.go:89] found id: ""
	I0722 11:53:49.285569   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.285577   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:49.285582   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:49.285630   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:49.323300   59674 cri.go:89] found id: ""
	I0722 11:53:49.323321   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.323331   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:49.323336   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:49.323393   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:49.361571   59674 cri.go:89] found id: ""
	I0722 11:53:49.361599   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.361609   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:49.361615   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:49.361675   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:49.398709   59674 cri.go:89] found id: ""
	I0722 11:53:49.398736   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.398747   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:49.398753   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:49.398813   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:49.430527   59674 cri.go:89] found id: ""
	I0722 11:53:49.430552   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.430564   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:49.430576   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:49.430591   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:49.481517   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:49.481557   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:49.496069   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:49.496094   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:49.563515   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:49.563536   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:49.563549   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:49.645313   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:49.645354   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:45.678130   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.179309   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:45.857932   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.356438   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:50.356527   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:50.348077   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.846675   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.188460   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:52.201620   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:52.201689   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:52.238836   59674 cri.go:89] found id: ""
	I0722 11:53:52.238858   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.238865   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:52.238870   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:52.238932   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:52.275739   59674 cri.go:89] found id: ""
	I0722 11:53:52.275760   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.275768   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:52.275781   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:52.275839   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:52.310362   59674 cri.go:89] found id: ""
	I0722 11:53:52.310390   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.310397   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:52.310402   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:52.310461   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:52.348733   59674 cri.go:89] found id: ""
	I0722 11:53:52.348753   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.348760   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:52.348766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:52.348822   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:52.383052   59674 cri.go:89] found id: ""
	I0722 11:53:52.383079   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.383087   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:52.383094   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:52.383155   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:52.420557   59674 cri.go:89] found id: ""
	I0722 11:53:52.420579   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.420587   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:52.420592   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:52.420655   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:52.454027   59674 cri.go:89] found id: ""
	I0722 11:53:52.454057   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.454066   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:52.454073   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:52.454134   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:52.495433   59674 cri.go:89] found id: ""
	I0722 11:53:52.495458   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.495469   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:52.495480   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:52.495493   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:52.541383   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:52.541417   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:52.595687   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:52.595733   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:52.609965   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:52.609987   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:52.687531   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:52.687552   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:52.687565   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:55.270419   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:55.284577   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:55.284632   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:55.321978   59674 cri.go:89] found id: ""
	I0722 11:53:55.322014   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.322023   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:55.322030   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:55.322092   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:55.358710   59674 cri.go:89] found id: ""
	I0722 11:53:55.358736   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.358746   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:55.358753   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:55.358807   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:55.394784   59674 cri.go:89] found id: ""
	I0722 11:53:55.394810   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.394820   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:55.394827   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:55.394884   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:50.677072   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.678016   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.177624   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.356565   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:54.357061   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.347422   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:57.846266   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.429035   59674 cri.go:89] found id: ""
	I0722 11:53:55.429059   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.429066   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:55.429072   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:55.429122   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:55.464733   59674 cri.go:89] found id: ""
	I0722 11:53:55.464754   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.464761   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:55.464767   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:55.464824   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:55.500113   59674 cri.go:89] found id: ""
	I0722 11:53:55.500140   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.500152   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:55.500164   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:55.500227   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:55.536013   59674 cri.go:89] found id: ""
	I0722 11:53:55.536040   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.536050   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:55.536056   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:55.536129   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:55.575385   59674 cri.go:89] found id: ""
	I0722 11:53:55.575412   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.575420   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:55.575428   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:55.575439   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:55.628427   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:55.628459   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:55.642648   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:55.642677   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:55.715236   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:55.715258   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:55.715270   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:55.794200   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:55.794233   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:58.336329   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:58.351000   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:58.351056   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:58.389817   59674 cri.go:89] found id: ""
	I0722 11:53:58.389841   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.389849   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:58.389854   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:58.389902   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:58.430814   59674 cri.go:89] found id: ""
	I0722 11:53:58.430843   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.430852   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:58.430857   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:58.430917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:58.477898   59674 cri.go:89] found id: ""
	I0722 11:53:58.477928   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.477938   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:58.477947   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:58.477992   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:58.513426   59674 cri.go:89] found id: ""
	I0722 11:53:58.513450   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.513461   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:58.513468   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:58.513530   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:58.546455   59674 cri.go:89] found id: ""
	I0722 11:53:58.546484   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.546494   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:58.546501   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:58.546560   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:58.582248   59674 cri.go:89] found id: ""
	I0722 11:53:58.582273   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.582280   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:58.582286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:58.582339   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:58.617221   59674 cri.go:89] found id: ""
	I0722 11:53:58.617246   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.617253   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:58.617259   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:58.617321   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:58.648896   59674 cri.go:89] found id: ""
	I0722 11:53:58.648930   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.648941   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:58.648949   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:58.648962   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:58.701735   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:58.701771   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:58.715747   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:58.715766   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:58.782104   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:58.782125   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:58.782136   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:58.868634   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:58.868664   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:57.677281   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:00.179188   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:56.856873   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:58.864754   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:59.846378   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:02.346626   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:04.346748   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:01.410874   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:01.423839   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:01.423914   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:01.460156   59674 cri.go:89] found id: ""
	I0722 11:54:01.460181   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.460191   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:01.460198   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:01.460252   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:01.497130   59674 cri.go:89] found id: ""
	I0722 11:54:01.497156   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.497165   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:01.497172   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:01.497228   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:01.532805   59674 cri.go:89] found id: ""
	I0722 11:54:01.532832   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.532842   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:01.532849   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:01.532907   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:01.569955   59674 cri.go:89] found id: ""
	I0722 11:54:01.569989   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.569999   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:01.570014   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:01.570067   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:01.602937   59674 cri.go:89] found id: ""
	I0722 11:54:01.602967   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.602977   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:01.602983   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:01.603033   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:01.634250   59674 cri.go:89] found id: ""
	I0722 11:54:01.634276   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.634283   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:01.634289   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:01.634337   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:01.670256   59674 cri.go:89] found id: ""
	I0722 11:54:01.670286   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.670295   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:01.670300   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:01.670348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:01.708555   59674 cri.go:89] found id: ""
	I0722 11:54:01.708577   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.708584   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:01.708592   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:01.708603   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:01.723065   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:01.723090   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:01.790642   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:01.790662   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:01.790673   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:01.887827   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:01.887861   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:01.927121   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:01.927143   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:04.479248   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:04.493038   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:04.493101   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:04.527516   59674 cri.go:89] found id: ""
	I0722 11:54:04.527539   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.527547   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:04.527557   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:04.527603   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:04.565830   59674 cri.go:89] found id: ""
	I0722 11:54:04.565863   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.565874   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:04.565882   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:04.565970   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:04.606198   59674 cri.go:89] found id: ""
	I0722 11:54:04.606223   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.606235   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:04.606242   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:04.606301   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:04.650372   59674 cri.go:89] found id: ""
	I0722 11:54:04.650394   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.650403   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:04.650411   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:04.650473   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:04.689556   59674 cri.go:89] found id: ""
	I0722 11:54:04.689580   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.689587   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:04.689592   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:04.689648   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:04.724954   59674 cri.go:89] found id: ""
	I0722 11:54:04.724986   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.724997   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:04.725004   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:04.725057   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:04.769000   59674 cri.go:89] found id: ""
	I0722 11:54:04.769024   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.769031   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:04.769037   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:04.769088   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:04.802022   59674 cri.go:89] found id: ""
	I0722 11:54:04.802042   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.802049   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:04.802057   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:04.802067   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:04.855969   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:04.856006   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:04.871210   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:04.871238   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:04.938050   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:04.938069   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:04.938082   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:05.014415   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:05.014449   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:02.677036   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:04.677779   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:01.356993   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:03.856173   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:06.847195   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:08.847333   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:07.556725   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:07.583525   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:07.583600   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:07.618546   59674 cri.go:89] found id: ""
	I0722 11:54:07.618574   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.618584   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:07.618591   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:07.618651   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:07.655218   59674 cri.go:89] found id: ""
	I0722 11:54:07.655247   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.655256   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:07.655261   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:07.655321   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:07.695453   59674 cri.go:89] found id: ""
	I0722 11:54:07.695482   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.695491   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:07.695499   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:07.695558   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:07.729887   59674 cri.go:89] found id: ""
	I0722 11:54:07.729922   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.729932   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:07.729939   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:07.729998   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:07.768429   59674 cri.go:89] found id: ""
	I0722 11:54:07.768451   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.768458   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:07.768464   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:07.768520   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:07.804372   59674 cri.go:89] found id: ""
	I0722 11:54:07.804408   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.804419   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:07.804426   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:07.804479   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:07.840924   59674 cri.go:89] found id: ""
	I0722 11:54:07.840948   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.840958   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:07.840965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:07.841027   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:07.877796   59674 cri.go:89] found id: ""
	I0722 11:54:07.877823   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.877830   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:07.877838   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:07.877849   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:07.930437   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:07.930467   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:07.943581   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:07.943611   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:08.013944   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:08.013963   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:08.013973   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:08.090969   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:08.091007   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:07.178423   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:09.178648   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:05.856697   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:07.857718   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:10.356584   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:11.345407   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:13.346477   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:10.631507   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:10.644886   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:10.644958   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:10.679242   59674 cri.go:89] found id: ""
	I0722 11:54:10.679268   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.679278   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:10.679284   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:10.679340   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:10.714324   59674 cri.go:89] found id: ""
	I0722 11:54:10.714351   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.714358   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:10.714364   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:10.714425   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:10.751053   59674 cri.go:89] found id: ""
	I0722 11:54:10.751075   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.751090   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:10.751097   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:10.751164   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:10.788736   59674 cri.go:89] found id: ""
	I0722 11:54:10.788765   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.788775   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:10.788782   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:10.788899   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:10.823780   59674 cri.go:89] found id: ""
	I0722 11:54:10.823804   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.823814   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:10.823821   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:10.823884   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:10.859708   59674 cri.go:89] found id: ""
	I0722 11:54:10.859731   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.859741   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:10.859748   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:10.859804   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:10.893364   59674 cri.go:89] found id: ""
	I0722 11:54:10.893390   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.893400   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:10.893409   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:10.893471   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:10.929444   59674 cri.go:89] found id: ""
	I0722 11:54:10.929472   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.929481   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:10.929489   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:10.929501   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:10.968567   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:10.968598   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:11.024447   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:11.024484   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:11.039405   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:11.039429   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:11.116322   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:11.116341   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:11.116356   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:13.697581   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:13.711738   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:13.711831   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:13.747711   59674 cri.go:89] found id: ""
	I0722 11:54:13.747742   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.747750   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:13.747757   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:13.747812   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:13.790965   59674 cri.go:89] found id: ""
	I0722 11:54:13.790987   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.790997   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:13.791005   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:13.791053   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:13.829043   59674 cri.go:89] found id: ""
	I0722 11:54:13.829071   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.829080   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:13.829086   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:13.829159   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:13.865542   59674 cri.go:89] found id: ""
	I0722 11:54:13.865560   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.865567   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:13.865572   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:13.865615   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:13.897709   59674 cri.go:89] found id: ""
	I0722 11:54:13.897749   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.897762   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:13.897769   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:13.897833   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:13.931319   59674 cri.go:89] found id: ""
	I0722 11:54:13.931339   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.931348   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:13.931355   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:13.931409   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:13.987927   59674 cri.go:89] found id: ""
	I0722 11:54:13.987954   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.987964   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:13.987970   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:13.988030   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:14.028680   59674 cri.go:89] found id: ""
	I0722 11:54:14.028706   59674 logs.go:276] 0 containers: []
	W0722 11:54:14.028716   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:14.028726   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:14.028743   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:14.089863   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:14.089904   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:14.103664   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:14.103691   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:14.174453   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:14.174479   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:14.174496   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:14.260748   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:14.260780   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:11.677037   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:13.679784   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:12.856073   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:14.857810   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:15.846577   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:17.846873   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:16.800474   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:16.814408   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:16.814472   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:16.849936   59674 cri.go:89] found id: ""
	I0722 11:54:16.849963   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.849972   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:16.849979   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:16.850037   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:16.884323   59674 cri.go:89] found id: ""
	I0722 11:54:16.884349   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.884360   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:16.884367   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:16.884445   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:16.921549   59674 cri.go:89] found id: ""
	I0722 11:54:16.921635   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.921652   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:16.921660   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:16.921726   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:16.959670   59674 cri.go:89] found id: ""
	I0722 11:54:16.959701   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.959711   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:16.959719   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:16.959779   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:16.995577   59674 cri.go:89] found id: ""
	I0722 11:54:16.995605   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.995615   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:16.995624   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:16.995683   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:17.032026   59674 cri.go:89] found id: ""
	I0722 11:54:17.032056   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.032067   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:17.032075   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:17.032156   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:17.068309   59674 cri.go:89] found id: ""
	I0722 11:54:17.068337   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.068348   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:17.068355   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:17.068433   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:17.106731   59674 cri.go:89] found id: ""
	I0722 11:54:17.106760   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.106776   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:17.106787   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:17.106801   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:17.159944   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:17.159971   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:17.174479   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:17.174513   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:17.249311   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:17.249332   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:17.249345   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:17.335527   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:17.335561   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:19.874791   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:19.892887   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:19.892961   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:19.945700   59674 cri.go:89] found id: ""
	I0722 11:54:19.945729   59674 logs.go:276] 0 containers: []
	W0722 11:54:19.945737   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:19.945742   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:19.945799   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:19.996027   59674 cri.go:89] found id: ""
	I0722 11:54:19.996062   59674 logs.go:276] 0 containers: []
	W0722 11:54:19.996072   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:19.996078   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:19.996133   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:20.040793   59674 cri.go:89] found id: ""
	I0722 11:54:20.040820   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.040830   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:20.040837   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:20.040906   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:20.073737   59674 cri.go:89] found id: ""
	I0722 11:54:20.073760   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.073768   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:20.073774   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:20.073817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:20.108255   59674 cri.go:89] found id: ""
	I0722 11:54:20.108280   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.108287   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:20.108294   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:20.108342   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:20.143140   59674 cri.go:89] found id: ""
	I0722 11:54:20.143165   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.143174   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:20.143180   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:20.143225   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:20.177009   59674 cri.go:89] found id: ""
	I0722 11:54:20.177030   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.177037   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:20.177043   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:20.177089   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:20.215743   59674 cri.go:89] found id: ""
	I0722 11:54:20.215765   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.215773   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:20.215781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:20.215791   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:20.267872   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:20.267905   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:20.281601   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:20.281626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:20.352347   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:20.352364   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:20.352376   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:16.178494   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:18.676724   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:17.357519   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:19.856259   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:20.346488   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:22.847018   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:20.431695   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:20.431727   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:22.974218   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:22.988161   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:22.988235   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:23.024542   59674 cri.go:89] found id: ""
	I0722 11:54:23.024571   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.024581   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:23.024588   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:23.024656   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:23.067343   59674 cri.go:89] found id: ""
	I0722 11:54:23.067367   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.067376   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:23.067383   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:23.067443   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:23.103711   59674 cri.go:89] found id: ""
	I0722 11:54:23.103741   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.103751   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:23.103758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:23.103817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:23.137896   59674 cri.go:89] found id: ""
	I0722 11:54:23.137926   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.137937   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:23.137944   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:23.138002   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:23.174689   59674 cri.go:89] found id: ""
	I0722 11:54:23.174722   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.174733   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:23.174742   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:23.174795   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:23.208669   59674 cri.go:89] found id: ""
	I0722 11:54:23.208690   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.208700   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:23.208708   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:23.208766   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:23.243286   59674 cri.go:89] found id: ""
	I0722 11:54:23.243314   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.243326   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:23.243335   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:23.243401   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:23.279277   59674 cri.go:89] found id: ""
	I0722 11:54:23.279303   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.279312   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:23.279324   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:23.279337   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:23.332016   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:23.332045   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:23.346383   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:23.346417   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:23.421449   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:23.421471   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:23.421486   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:23.507395   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:23.507432   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:20.678148   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:23.180048   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:21.856482   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:24.357098   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:25.346414   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:27.847108   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:26.053610   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:26.068359   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:26.068448   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:26.102425   59674 cri.go:89] found id: ""
	I0722 11:54:26.102454   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.102465   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:26.102472   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:26.102531   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:26.135572   59674 cri.go:89] found id: ""
	I0722 11:54:26.135598   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.135608   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:26.135616   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:26.135682   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:26.175015   59674 cri.go:89] found id: ""
	I0722 11:54:26.175044   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.175054   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:26.175062   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:26.175123   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:26.209186   59674 cri.go:89] found id: ""
	I0722 11:54:26.209209   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.209216   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:26.209221   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:26.209275   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:26.248477   59674 cri.go:89] found id: ""
	I0722 11:54:26.248500   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.248507   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:26.248512   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:26.248590   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:26.281481   59674 cri.go:89] found id: ""
	I0722 11:54:26.281506   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.281515   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:26.281520   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:26.281580   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:26.314467   59674 cri.go:89] found id: ""
	I0722 11:54:26.314496   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.314503   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:26.314509   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:26.314556   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:26.349396   59674 cri.go:89] found id: ""
	I0722 11:54:26.349422   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.349431   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:26.349441   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:26.349454   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:26.403227   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:26.403253   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:26.415860   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:26.415882   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:26.484768   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:26.484793   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:26.484809   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:26.563360   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:26.563396   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:29.103764   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:29.117120   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:29.117193   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:29.153198   59674 cri.go:89] found id: ""
	I0722 11:54:29.153241   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.153252   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:29.153260   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:29.153324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:29.190406   59674 cri.go:89] found id: ""
	I0722 11:54:29.190426   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.190433   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:29.190438   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:29.190486   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:29.232049   59674 cri.go:89] found id: ""
	I0722 11:54:29.232073   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.232080   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:29.232085   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:29.232147   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:29.270174   59674 cri.go:89] found id: ""
	I0722 11:54:29.270200   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.270208   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:29.270218   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:29.270268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:29.307709   59674 cri.go:89] found id: ""
	I0722 11:54:29.307733   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.307740   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:29.307746   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:29.307802   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:29.343807   59674 cri.go:89] found id: ""
	I0722 11:54:29.343832   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.343842   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:29.343850   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:29.343907   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:29.380240   59674 cri.go:89] found id: ""
	I0722 11:54:29.380263   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.380270   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:29.380276   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:29.380332   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:29.412785   59674 cri.go:89] found id: ""
	I0722 11:54:29.412811   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.412820   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:29.412830   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:29.412844   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:29.470948   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:29.470985   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:29.485120   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:29.485146   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:29.558760   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:29.558778   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:29.558792   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:29.638093   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:29.638123   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:25.677216   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:28.177196   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:30.179148   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:26.357390   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:28.856928   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:30.345586   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:32.346444   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:34.347606   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:32.183511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:32.196719   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:32.196796   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:32.229436   59674 cri.go:89] found id: ""
	I0722 11:54:32.229466   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.229474   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:32.229480   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:32.229533   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:32.271971   59674 cri.go:89] found id: ""
	I0722 11:54:32.271998   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.272008   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:32.272017   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:32.272086   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:32.302967   59674 cri.go:89] found id: ""
	I0722 11:54:32.302991   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.302999   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:32.303005   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:32.303053   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:32.334443   59674 cri.go:89] found id: ""
	I0722 11:54:32.334468   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.334478   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:32.334485   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:32.334544   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:32.371586   59674 cri.go:89] found id: ""
	I0722 11:54:32.371612   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.371622   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:32.371630   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:32.371693   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:32.419920   59674 cri.go:89] found id: ""
	I0722 11:54:32.419954   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.419966   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:32.419974   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:32.420034   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:32.459377   59674 cri.go:89] found id: ""
	I0722 11:54:32.459398   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.459405   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:32.459411   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:32.459472   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:32.500740   59674 cri.go:89] found id: ""
	I0722 11:54:32.500764   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.500771   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:32.500781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:32.500796   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:32.551285   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:32.551316   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:32.564448   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:32.564474   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:32.637652   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:32.637679   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:32.637694   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:32.721599   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:32.721638   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:35.265202   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:35.278766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:35.278844   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:35.312545   59674 cri.go:89] found id: ""
	I0722 11:54:35.312574   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.312582   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:35.312587   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:35.312637   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:35.346988   59674 cri.go:89] found id: ""
	I0722 11:54:35.347014   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.347024   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:35.347032   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:35.347090   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:35.382876   59674 cri.go:89] found id: ""
	I0722 11:54:35.382908   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.382920   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:35.382929   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:35.382997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:32.677327   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:34.677947   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:31.356011   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:33.356576   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:36.846349   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:39.346311   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:35.418093   59674 cri.go:89] found id: ""
	I0722 11:54:35.418115   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.418122   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:35.418129   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:35.418186   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:35.455262   59674 cri.go:89] found id: ""
	I0722 11:54:35.455291   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.455301   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:35.455306   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:35.455362   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:35.494893   59674 cri.go:89] found id: ""
	I0722 11:54:35.494924   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.494934   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:35.494945   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:35.495007   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:35.529768   59674 cri.go:89] found id: ""
	I0722 11:54:35.529791   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.529798   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:35.529804   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:35.529850   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:35.564972   59674 cri.go:89] found id: ""
	I0722 11:54:35.565001   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.565012   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:35.565024   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:35.565039   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:35.615985   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:35.616025   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:35.630133   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:35.630156   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:35.699669   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:35.699697   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:35.699711   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:35.779737   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:35.779771   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:38.320368   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:38.334371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:38.334443   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:38.371050   59674 cri.go:89] found id: ""
	I0722 11:54:38.371081   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.371088   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:38.371109   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:38.371170   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:38.410676   59674 cri.go:89] found id: ""
	I0722 11:54:38.410698   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.410706   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:38.410712   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:38.410770   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:38.447331   59674 cri.go:89] found id: ""
	I0722 11:54:38.447357   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.447366   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:38.447371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:38.447426   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:38.483548   59674 cri.go:89] found id: ""
	I0722 11:54:38.483589   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.483600   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:38.483608   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:38.483669   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:38.521694   59674 cri.go:89] found id: ""
	I0722 11:54:38.521723   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.521737   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:38.521742   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:38.521799   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:38.560507   59674 cri.go:89] found id: ""
	I0722 11:54:38.560532   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.560543   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:38.560550   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:38.560609   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:38.595734   59674 cri.go:89] found id: ""
	I0722 11:54:38.595761   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.595771   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:38.595778   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:38.595839   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:38.634176   59674 cri.go:89] found id: ""
	I0722 11:54:38.634198   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.634205   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:38.634213   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:38.634224   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:38.688196   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:38.688235   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:38.701554   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:38.701583   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:38.772547   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:38.772575   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:38.772590   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:38.858025   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:38.858056   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:37.179449   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:39.179903   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:35.856424   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:38.357566   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:41.347531   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:43.846195   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:41.400777   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:41.415370   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:41.415427   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:41.448023   59674 cri.go:89] found id: ""
	I0722 11:54:41.448045   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.448052   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:41.448058   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:41.448104   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:41.480745   59674 cri.go:89] found id: ""
	I0722 11:54:41.480766   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.480774   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:41.480779   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:41.480830   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:41.514627   59674 cri.go:89] found id: ""
	I0722 11:54:41.514651   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.514666   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:41.514673   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:41.514731   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:41.548226   59674 cri.go:89] found id: ""
	I0722 11:54:41.548255   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.548267   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:41.548274   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:41.548325   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:41.581361   59674 cri.go:89] found id: ""
	I0722 11:54:41.581383   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.581390   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:41.581396   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:41.581452   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:41.616249   59674 cri.go:89] found id: ""
	I0722 11:54:41.616277   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.616287   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:41.616295   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:41.616361   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:41.651569   59674 cri.go:89] found id: ""
	I0722 11:54:41.651593   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.651601   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:41.651607   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:41.651657   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:41.685173   59674 cri.go:89] found id: ""
	I0722 11:54:41.685194   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.685202   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:41.685209   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:41.685222   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:41.762374   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:41.762393   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:41.762405   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:41.843370   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:41.843403   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:41.883097   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:41.883127   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:41.933824   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:41.933854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:44.447568   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:44.461528   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:44.461608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:44.497926   59674 cri.go:89] found id: ""
	I0722 11:54:44.497951   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.497958   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:44.497963   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:44.498023   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:44.534483   59674 cri.go:89] found id: ""
	I0722 11:54:44.534507   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.534515   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:44.534520   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:44.534565   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:44.573106   59674 cri.go:89] found id: ""
	I0722 11:54:44.573140   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.573148   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:44.573154   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:44.573204   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:44.610565   59674 cri.go:89] found id: ""
	I0722 11:54:44.610612   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.610626   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:44.610634   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:44.610697   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:44.646946   59674 cri.go:89] found id: ""
	I0722 11:54:44.646980   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.646994   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:44.647001   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:44.647060   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:44.685876   59674 cri.go:89] found id: ""
	I0722 11:54:44.685904   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.685913   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:44.685919   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:44.685969   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:44.720398   59674 cri.go:89] found id: ""
	I0722 11:54:44.720425   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.720434   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:44.720441   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:44.720506   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:44.757472   59674 cri.go:89] found id: ""
	I0722 11:54:44.757501   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.757511   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:44.757522   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:44.757535   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:44.807442   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:44.807468   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:44.820432   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:44.820457   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:44.892182   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:44.892199   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:44.892209   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:44.976545   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:44.976580   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:41.677120   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:44.178554   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:40.855578   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:42.856278   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:44.857519   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:45.846257   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:47.846886   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:47.519413   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:47.532974   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:47.533035   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:47.570869   59674 cri.go:89] found id: ""
	I0722 11:54:47.570904   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.570915   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:47.570923   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:47.571055   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:47.606020   59674 cri.go:89] found id: ""
	I0722 11:54:47.606045   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.606052   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:47.606057   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:47.606106   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:47.642717   59674 cri.go:89] found id: ""
	I0722 11:54:47.642741   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.642752   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:47.642758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:47.642817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:47.677761   59674 cri.go:89] found id: ""
	I0722 11:54:47.677786   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.677796   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:47.677803   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:47.677863   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:47.710989   59674 cri.go:89] found id: ""
	I0722 11:54:47.711016   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.711025   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:47.711032   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:47.711097   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:47.744814   59674 cri.go:89] found id: ""
	I0722 11:54:47.744839   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.744847   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:47.744853   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:47.744904   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:47.778926   59674 cri.go:89] found id: ""
	I0722 11:54:47.778953   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.778960   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:47.778965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:47.779015   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:47.818419   59674 cri.go:89] found id: ""
	I0722 11:54:47.818458   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.818465   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:47.818473   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:47.818485   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:47.870867   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:47.870892   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:47.884504   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:47.884523   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:47.952449   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:47.952470   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:47.952485   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:48.035731   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:48.035763   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:46.181522   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:48.676888   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:46.860517   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:49.356456   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:50.346125   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:52.848790   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:50.589071   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:50.602786   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:50.602880   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:50.638324   59674 cri.go:89] found id: ""
	I0722 11:54:50.638355   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.638366   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:50.638375   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:50.638438   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:50.674906   59674 cri.go:89] found id: ""
	I0722 11:54:50.674932   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.674947   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:50.674955   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:50.675017   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:50.709284   59674 cri.go:89] found id: ""
	I0722 11:54:50.709313   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.709322   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:50.709328   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:50.709387   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:50.748595   59674 cri.go:89] found id: ""
	I0722 11:54:50.748623   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.748632   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:50.748638   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:50.748695   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:50.782681   59674 cri.go:89] found id: ""
	I0722 11:54:50.782707   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.782716   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:50.782721   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:50.782797   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:50.820037   59674 cri.go:89] found id: ""
	I0722 11:54:50.820067   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.820077   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:50.820084   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:50.820150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:50.857807   59674 cri.go:89] found id: ""
	I0722 11:54:50.857835   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.857845   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:50.857852   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:50.857925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:50.894924   59674 cri.go:89] found id: ""
	I0722 11:54:50.894946   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.894954   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:50.894962   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:50.894981   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:50.947373   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:50.947407   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:50.962243   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:50.962272   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:51.041450   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:51.041474   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:51.041488   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:51.133982   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:51.134018   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:53.678461   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:53.691710   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:53.691778   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:53.726266   59674 cri.go:89] found id: ""
	I0722 11:54:53.726294   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.726305   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:53.726313   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:53.726366   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:53.759262   59674 cri.go:89] found id: ""
	I0722 11:54:53.759291   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.759303   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:53.759311   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:53.759381   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:53.795859   59674 cri.go:89] found id: ""
	I0722 11:54:53.795894   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.795906   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:53.795913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:53.795975   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:53.842343   59674 cri.go:89] found id: ""
	I0722 11:54:53.842366   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.842379   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:53.842387   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:53.842444   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:53.882648   59674 cri.go:89] found id: ""
	I0722 11:54:53.882674   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.882684   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:53.882691   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:53.882751   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:53.914352   59674 cri.go:89] found id: ""
	I0722 11:54:53.914373   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.914380   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:53.914386   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:53.914442   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:53.952257   59674 cri.go:89] found id: ""
	I0722 11:54:53.952286   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.952296   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:53.952301   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:53.952348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:53.991612   59674 cri.go:89] found id: ""
	I0722 11:54:53.991642   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.991651   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:53.991661   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:53.991682   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:54.065253   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:54.065271   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:54.065285   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:54.153570   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:54.153603   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:54.195100   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:54.195138   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:54.246784   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:54.246812   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:50.677516   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:53.180319   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.182749   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:51.356623   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:53.856817   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.346845   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:57.846691   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:56.762702   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:56.776501   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:56.776567   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:56.809838   59674 cri.go:89] found id: ""
	I0722 11:54:56.809866   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.809874   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:56.809882   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:56.809934   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:56.845567   59674 cri.go:89] found id: ""
	I0722 11:54:56.845594   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.845602   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:56.845610   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:56.845672   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:56.879899   59674 cri.go:89] found id: ""
	I0722 11:54:56.879929   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.879939   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:56.879946   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:56.880000   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:56.911631   59674 cri.go:89] found id: ""
	I0722 11:54:56.911658   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.911667   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:56.911675   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:56.911734   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:56.946101   59674 cri.go:89] found id: ""
	I0722 11:54:56.946124   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.946132   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:56.946142   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:56.946211   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:56.980265   59674 cri.go:89] found id: ""
	I0722 11:54:56.980289   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.980301   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:56.980308   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:56.980367   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:57.014902   59674 cri.go:89] found id: ""
	I0722 11:54:57.014935   59674 logs.go:276] 0 containers: []
	W0722 11:54:57.014951   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:57.014958   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:57.015021   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:57.051573   59674 cri.go:89] found id: ""
	I0722 11:54:57.051597   59674 logs.go:276] 0 containers: []
	W0722 11:54:57.051605   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:57.051613   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:57.051626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:57.065650   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:57.065683   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:57.133230   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:57.133257   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:57.133275   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:57.217002   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:57.217038   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:57.260236   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:57.260264   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:59.812785   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:59.826782   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:59.826836   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:59.863375   59674 cri.go:89] found id: ""
	I0722 11:54:59.863404   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.863414   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:59.863423   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:59.863484   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:59.902161   59674 cri.go:89] found id: ""
	I0722 11:54:59.902193   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.902204   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:59.902211   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:59.902263   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:59.945153   59674 cri.go:89] found id: ""
	I0722 11:54:59.945182   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.945193   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:59.945201   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:59.945265   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:59.989535   59674 cri.go:89] found id: ""
	I0722 11:54:59.989562   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.989570   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:59.989575   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:59.989643   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:00.028977   59674 cri.go:89] found id: ""
	I0722 11:55:00.029001   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.029009   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:00.029017   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:00.029068   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:00.065396   59674 cri.go:89] found id: ""
	I0722 11:55:00.065425   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.065437   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:00.065447   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:00.065502   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:00.104354   59674 cri.go:89] found id: ""
	I0722 11:55:00.104397   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.104409   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:00.104417   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:00.104480   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:00.141798   59674 cri.go:89] found id: ""
	I0722 11:55:00.141822   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.141829   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:00.141838   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:00.141853   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:00.195791   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:00.195823   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:00.214812   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:00.214845   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:00.307286   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:00.307311   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:00.307323   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:00.409770   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:00.409805   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:57.676737   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:59.677273   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.857348   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:58.356555   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:59.846954   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.345998   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:04.346077   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.951630   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:02.964673   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:02.964728   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:03.005256   59674 cri.go:89] found id: ""
	I0722 11:55:03.005285   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.005296   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:03.005303   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:03.005359   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:03.037558   59674 cri.go:89] found id: ""
	I0722 11:55:03.037587   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.037595   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:03.037600   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:03.037646   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:03.071168   59674 cri.go:89] found id: ""
	I0722 11:55:03.071196   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.071206   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:03.071214   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:03.071271   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:03.104212   59674 cri.go:89] found id: ""
	I0722 11:55:03.104238   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.104248   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:03.104255   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:03.104313   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:03.141378   59674 cri.go:89] found id: ""
	I0722 11:55:03.141401   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.141409   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:03.141414   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:03.141458   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:03.178881   59674 cri.go:89] found id: ""
	I0722 11:55:03.178906   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.178915   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:03.178923   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:03.178987   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:03.215768   59674 cri.go:89] found id: ""
	I0722 11:55:03.215796   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.215804   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:03.215810   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:03.215854   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:03.256003   59674 cri.go:89] found id: ""
	I0722 11:55:03.256029   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.256041   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:03.256051   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:03.256069   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:03.308182   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:03.308216   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:03.323870   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:03.323903   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:03.406646   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:03.406670   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:03.406682   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:03.490947   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:03.490984   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:01.677312   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:03.677505   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:00.856013   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.856211   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:04.857113   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.348448   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:08.846007   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.030341   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:06.046814   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:06.046874   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:06.088735   59674 cri.go:89] found id: ""
	I0722 11:55:06.088756   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.088764   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:06.088770   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:06.088823   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:06.153138   59674 cri.go:89] found id: ""
	I0722 11:55:06.153165   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.153174   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:06.153181   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:06.153241   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:06.203479   59674 cri.go:89] found id: ""
	I0722 11:55:06.203506   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.203516   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:06.203523   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:06.203585   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:06.239632   59674 cri.go:89] found id: ""
	I0722 11:55:06.239661   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.239671   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:06.239678   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:06.239739   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:06.278663   59674 cri.go:89] found id: ""
	I0722 11:55:06.278693   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.278703   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:06.278711   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:06.278772   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:06.318291   59674 cri.go:89] found id: ""
	I0722 11:55:06.318315   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.318323   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:06.318329   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:06.318382   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:06.355362   59674 cri.go:89] found id: ""
	I0722 11:55:06.355383   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.355390   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:06.355395   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:06.355446   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:06.395032   59674 cri.go:89] found id: ""
	I0722 11:55:06.395062   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.395073   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:06.395084   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:06.395098   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:06.451585   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:06.451623   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:06.466009   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:06.466037   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:06.534051   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:06.534071   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:06.534082   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:06.617165   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:06.617202   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:09.155242   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:09.169327   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:09.169389   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:09.209138   59674 cri.go:89] found id: ""
	I0722 11:55:09.209165   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.209174   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:09.209181   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:09.209243   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:09.249129   59674 cri.go:89] found id: ""
	I0722 11:55:09.249156   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.249167   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:09.249175   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:09.249237   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:09.284350   59674 cri.go:89] found id: ""
	I0722 11:55:09.284374   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.284400   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:09.284416   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:09.284487   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:09.317288   59674 cri.go:89] found id: ""
	I0722 11:55:09.317315   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.317322   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:09.317327   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:09.317374   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:09.353227   59674 cri.go:89] found id: ""
	I0722 11:55:09.353249   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.353259   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:09.353266   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:09.353324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:09.388376   59674 cri.go:89] found id: ""
	I0722 11:55:09.388434   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.388442   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:09.388448   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:09.388498   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:09.422197   59674 cri.go:89] found id: ""
	I0722 11:55:09.422221   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.422228   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:09.422235   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:09.422282   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:09.455321   59674 cri.go:89] found id: ""
	I0722 11:55:09.455350   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.455360   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:09.455370   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:09.455384   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:09.536331   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:09.536366   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:09.578847   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:09.578880   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:09.630597   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:09.630626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:09.644531   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:09.644557   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:09.710502   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
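The repeated cri.go/logs.go cycle above is minikube probing each expected control-plane component over SSH with "sudo crictl ps -a --quiet --name=<component>"; an empty ID list produces the "No container was found matching" warnings, after which the collector falls back to kubelet, dmesg, describe-nodes, CRI-O and container-status gathering. A minimal Go sketch of that probe pattern follows (illustrative only; runSSH and expectedComponents are hypothetical names, not minikube's actual source):

package main

import (
	"fmt"
	"strings"
)

// runSSH stands in for minikube's ssh_runner: it would execute the command on
// the node and return its stdout. Left unimplemented here.
func runSSH(cmd string) (string, error) { return "", nil }

func main() {
	expectedComponents := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range expectedComponents {
		// --quiet prints only container IDs; an empty result means no
		// container exists for this component, matching the log above.
		out, err := runSSH("sudo crictl ps -a --quiet --name=" + name)
		ids := strings.Fields(out)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
		}
	}
}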
	I0722 11:55:05.677998   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:07.678875   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:10.179254   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.857151   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:09.355988   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:11.345887   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:13.346945   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:12.210716   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:12.223909   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:12.223969   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:12.259241   59674 cri.go:89] found id: ""
	I0722 11:55:12.259266   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.259275   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:12.259282   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:12.259344   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:12.293967   59674 cri.go:89] found id: ""
	I0722 11:55:12.294000   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.294007   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:12.294013   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:12.294061   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:12.328073   59674 cri.go:89] found id: ""
	I0722 11:55:12.328106   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.328114   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:12.328121   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:12.328180   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:12.363176   59674 cri.go:89] found id: ""
	I0722 11:55:12.363200   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.363207   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:12.363213   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:12.363287   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:12.398145   59674 cri.go:89] found id: ""
	I0722 11:55:12.398171   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.398180   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:12.398185   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:12.398231   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:12.431824   59674 cri.go:89] found id: ""
	I0722 11:55:12.431853   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.431861   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:12.431867   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:12.431925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:12.465097   59674 cri.go:89] found id: ""
	I0722 11:55:12.465128   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.465135   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:12.465140   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:12.465186   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:12.502934   59674 cri.go:89] found id: ""
	I0722 11:55:12.502965   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.502974   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:12.502984   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:12.502999   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:12.541950   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:12.541979   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:12.592632   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:12.592660   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:12.606073   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:12.606098   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:12.675388   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:12.675417   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:12.675432   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:15.253008   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:15.266957   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:15.267028   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:15.303035   59674 cri.go:89] found id: ""
	I0722 11:55:15.303069   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.303080   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:15.303088   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:15.303150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:15.338089   59674 cri.go:89] found id: ""
	I0722 11:55:15.338113   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.338121   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:15.338126   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:15.338184   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:15.376973   59674 cri.go:89] found id: ""
	I0722 11:55:15.376998   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.377005   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:15.377015   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:15.377075   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:12.678613   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.178912   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:11.356248   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:13.855992   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.845568   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:17.845680   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.416466   59674 cri.go:89] found id: ""
	I0722 11:55:15.416491   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.416500   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:15.416507   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:15.416565   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:15.456472   59674 cri.go:89] found id: ""
	I0722 11:55:15.456501   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.456511   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:15.456519   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:15.456580   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:15.491963   59674 cri.go:89] found id: ""
	I0722 11:55:15.491991   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.491999   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:15.492005   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:15.492062   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:15.530819   59674 cri.go:89] found id: ""
	I0722 11:55:15.530847   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.530857   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:15.530865   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:15.530934   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:15.569388   59674 cri.go:89] found id: ""
	I0722 11:55:15.569415   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.569422   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:15.569430   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:15.569439   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:15.623949   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:15.623981   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:15.637828   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:15.637848   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:15.707733   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:15.707754   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:15.707765   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:15.787435   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:15.787473   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:18.329310   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:18.342412   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:18.342476   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:18.379542   59674 cri.go:89] found id: ""
	I0722 11:55:18.379563   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.379570   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:18.379575   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:18.379657   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:18.414442   59674 cri.go:89] found id: ""
	I0722 11:55:18.414468   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.414477   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:18.414483   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:18.414526   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:18.454571   59674 cri.go:89] found id: ""
	I0722 11:55:18.454598   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.454608   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:18.454614   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:18.454658   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:18.491012   59674 cri.go:89] found id: ""
	I0722 11:55:18.491039   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.491047   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:18.491052   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:18.491114   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:18.525923   59674 cri.go:89] found id: ""
	I0722 11:55:18.525952   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.525962   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:18.525970   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:18.526031   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:18.560288   59674 cri.go:89] found id: ""
	I0722 11:55:18.560315   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.560325   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:18.560332   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:18.560412   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:18.596674   59674 cri.go:89] found id: ""
	I0722 11:55:18.596698   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.596706   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:18.596712   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:18.596766   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:18.635012   59674 cri.go:89] found id: ""
	I0722 11:55:18.635034   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.635041   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:18.635049   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:18.635060   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:18.685999   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:18.686024   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:18.700085   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:18.700108   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:18.765465   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:18.765484   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:18.765495   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:18.846554   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:18.846592   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:17.179144   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:19.677144   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.857428   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:18.356050   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:19.846343   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:22.345281   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:24.346147   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:21.389684   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:21.401964   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:21.402042   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:21.438128   59674 cri.go:89] found id: ""
	I0722 11:55:21.438156   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.438165   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:21.438171   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:21.438258   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:21.475793   59674 cri.go:89] found id: ""
	I0722 11:55:21.475819   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.475828   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:21.475833   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:21.475878   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:21.510238   59674 cri.go:89] found id: ""
	I0722 11:55:21.510265   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.510273   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:21.510278   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:21.510333   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:21.548293   59674 cri.go:89] found id: ""
	I0722 11:55:21.548320   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.548331   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:21.548337   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:21.548403   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:21.584542   59674 cri.go:89] found id: ""
	I0722 11:55:21.584573   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.584584   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:21.584591   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:21.584655   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:21.621709   59674 cri.go:89] found id: ""
	I0722 11:55:21.621745   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.621758   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:21.621767   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:21.621854   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:21.656111   59674 cri.go:89] found id: ""
	I0722 11:55:21.656134   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.656143   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:21.656148   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:21.656197   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:21.692324   59674 cri.go:89] found id: ""
	I0722 11:55:21.692353   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.692363   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:21.692374   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:21.692405   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:21.769527   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:21.769550   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:21.769566   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:21.850069   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:21.850107   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:21.890781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:21.890816   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:21.952170   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:21.952211   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:24.467001   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:24.481526   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:24.481594   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:24.518694   59674 cri.go:89] found id: ""
	I0722 11:55:24.518724   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.518734   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:24.518740   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:24.518798   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:24.554606   59674 cri.go:89] found id: ""
	I0722 11:55:24.554629   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.554637   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:24.554642   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:24.554703   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:24.592042   59674 cri.go:89] found id: ""
	I0722 11:55:24.592072   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.592083   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:24.592090   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:24.592158   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:24.624456   59674 cri.go:89] found id: ""
	I0722 11:55:24.624479   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.624487   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:24.624493   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:24.624540   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:24.659502   59674 cri.go:89] found id: ""
	I0722 11:55:24.659526   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.659533   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:24.659541   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:24.659586   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:24.695548   59674 cri.go:89] found id: ""
	I0722 11:55:24.695572   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.695580   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:24.695585   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:24.695632   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:24.730320   59674 cri.go:89] found id: ""
	I0722 11:55:24.730362   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.730383   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:24.730391   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:24.730451   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:24.763002   59674 cri.go:89] found id: ""
	I0722 11:55:24.763031   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.763042   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:24.763053   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:24.763068   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:24.801537   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:24.801568   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:24.855157   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:24.855189   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:24.872946   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:24.872983   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:24.943654   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:24.943683   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:24.943697   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:21.677205   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:23.677250   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:20.857023   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:22.857266   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:25.356958   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:24.840700   59477 pod_ready.go:81] duration metric: took 4m0.000727978s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" ...
	E0722 11:55:24.840728   59477 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:55:24.840745   59477 pod_ready.go:38] duration metric: took 4m14.023350526s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:55:24.840771   59477 kubeadm.go:597] duration metric: took 4m21.561007849s to restartPrimaryControlPlane
	W0722 11:55:24.840842   59477 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:55:24.840871   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
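The interleaved pod_ready lines come from per-profile readiness polls: each metrics-server pod is checked roughly every two seconds until it reports Ready or a 4m0s cap expires, at which point the wait is abandoned without retry (the WaitExtra timeout above) and the control plane is reset. A generic Go sketch of such a poll, assuming a hypothetical podIsReady lookup in place of a real pod-status query:

package main

import (
	"errors"
	"fmt"
	"time"
)

// podIsReady stands in for a real lookup of the pod's Ready condition.
func podIsReady(name string) bool { return false }

func waitPodReady(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if podIsReady(name) {
			return nil
		}
		fmt.Printf("pod %q in \"kube-system\" namespace has status \"Ready\":\"False\"\n", name)
		time.Sleep(2 * time.Second)
	}
	return errors.New("timed out waiting " + timeout.String() + " for pod " + name + " to be Ready")
}

func main() {
	// Corresponds to the 4m0s WaitExtra timeout recorded in the log.
	if err := waitPodReady("metrics-server-569cc877fc-wm2w8", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}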
	I0722 11:55:27.532539   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:27.551073   59674 kubeadm.go:597] duration metric: took 4m3.599954496s to restartPrimaryControlPlane
	W0722 11:55:27.551154   59674 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:55:27.551183   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:55:28.607726   59674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.056515088s)
	I0722 11:55:28.607808   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:55:28.622638   59674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:55:28.633327   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:55:28.643630   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:55:28.643657   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:55:28.643708   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:55:28.655424   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:55:28.655488   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:55:28.666415   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:55:28.678321   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:55:28.678387   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:55:28.687990   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:55:28.700637   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:55:28.700688   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:55:28.711737   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:55:28.723611   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:55:28.723672   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
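The four grep/rm pairs above are a stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed so the following kubeadm init can regenerate it. A minimal sketch of that loop, assuming a hypothetical runSSH helper in place of minikube's remote runner:

package main

import "fmt"

// runSSH stands in for the remote command runner; grep exits non-zero when
// the endpoint (or the file itself) is missing.
func runSSH(cmd string) error { return nil }

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		if err := runSSH(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
			// Endpoint not found: drop the stale file before kubeadm init.
			_ = runSSH("sudo rm -f " + path)
		}
	}
}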
	I0722 11:55:28.734841   59674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:55:28.966498   59674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:55:25.677562   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:27.677626   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:29.678017   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:27.359533   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:29.856428   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:32.177943   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:34.677244   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:32.356225   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:34.357127   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:36.677815   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:39.176631   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:36.857121   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:38.857187   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:41.177346   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:43.179961   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:41.357029   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:43.857548   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:45.676921   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:47.677104   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:50.177979   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:45.858212   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:48.355737   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:50.357352   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:52.179852   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:54.678525   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:52.856789   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:54.857581   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:56.291211   59477 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.450312515s)
	I0722 11:55:56.291284   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:55:56.307108   59477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:55:56.316823   59477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:55:56.325987   59477 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:55:56.326008   59477 kubeadm.go:157] found existing configuration files:
	
	I0722 11:55:56.326040   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:55:56.334979   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:55:56.335022   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:55:56.344230   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:55:56.352903   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:55:56.352952   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:55:56.362589   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:55:56.371907   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:55:56.371960   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:55:56.381248   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:55:56.389979   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:55:56.390029   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:55:56.399143   59477 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:55:56.451195   59477 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 11:55:56.451336   59477 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:55:56.583288   59477 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:55:56.583416   59477 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:55:56.583545   59477 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:55:56.812941   59477 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:55:56.814801   59477 out.go:204]   - Generating certificates and keys ...
	I0722 11:55:56.814907   59477 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:55:56.815004   59477 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:55:56.815107   59477 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:55:56.815158   59477 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:55:56.815245   59477 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:55:56.815328   59477 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:55:56.815398   59477 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:55:56.815472   59477 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:55:56.815551   59477 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:55:56.815665   59477 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:55:56.815720   59477 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:55:56.815792   59477 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:55:56.905480   59477 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:55:57.235259   59477 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:55:57.382716   59477 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:55:57.782474   59477 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:55:57.975512   59477 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:55:57.975939   59477 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:55:57.978251   59477 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:55:57.980183   59477 out.go:204]   - Booting up control plane ...
	I0722 11:55:57.980296   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:55:57.980407   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:55:57.980501   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:55:57.997417   59477 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:55:57.998246   59477 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:55:57.998298   59477 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:55:58.125569   59477 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:55:58.125669   59477 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:55:59.127130   59477 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00142245s
	I0722 11:55:59.127288   59477 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:55:56.679572   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:59.177683   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:56.858200   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:59.356467   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:04.131970   59477 kubeadm.go:310] [api-check] The API server is healthy after 5.00210234s
	I0722 11:56:04.145149   59477 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:56:04.162087   59477 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:56:04.189220   59477 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:56:04.189501   59477 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-802149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:56:04.201016   59477 kubeadm.go:310] [bootstrap-token] Using token: kquhfx.1qbb4r033babuox0
	I0722 11:56:04.202331   59477 out.go:204]   - Configuring RBAC rules ...
	I0722 11:56:04.202440   59477 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:56:04.207324   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:56:04.217174   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:56:04.221591   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:56:04.225670   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:56:04.229980   59477 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:56:04.540237   59477 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:56:01.677898   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:03.678604   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:05.015791   59477 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:56:05.538526   59477 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:56:05.539474   59477 kubeadm.go:310] 
	I0722 11:56:05.539573   59477 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:56:05.539585   59477 kubeadm.go:310] 
	I0722 11:56:05.539684   59477 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:56:05.539701   59477 kubeadm.go:310] 
	I0722 11:56:05.539735   59477 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:56:05.539818   59477 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:56:05.539894   59477 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:56:05.539903   59477 kubeadm.go:310] 
	I0722 11:56:05.540003   59477 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:56:05.540026   59477 kubeadm.go:310] 
	I0722 11:56:05.540102   59477 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:56:05.540111   59477 kubeadm.go:310] 
	I0722 11:56:05.540178   59477 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:56:05.540280   59477 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:56:05.540390   59477 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:56:05.540399   59477 kubeadm.go:310] 
	I0722 11:56:05.540496   59477 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:56:05.540612   59477 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:56:05.540621   59477 kubeadm.go:310] 
	I0722 11:56:05.540765   59477 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kquhfx.1qbb4r033babuox0 \
	I0722 11:56:05.540917   59477 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:56:05.540954   59477 kubeadm.go:310] 	--control-plane 
	I0722 11:56:05.540963   59477 kubeadm.go:310] 
	I0722 11:56:05.541073   59477 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:56:05.541082   59477 kubeadm.go:310] 
	I0722 11:56:05.541188   59477 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kquhfx.1qbb4r033babuox0 \
	I0722 11:56:05.541330   59477 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:56:05.541765   59477 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:56:05.541892   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:56:05.541910   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:56:05.543345   59477 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:56:01.357811   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:03.359464   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:04.851108   60225 pod_ready.go:81] duration metric: took 4m0.000807254s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" ...
	E0722 11:56:04.851137   60225 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:56:04.851154   60225 pod_ready.go:38] duration metric: took 4m12.048821409s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:04.851185   60225 kubeadm.go:597] duration metric: took 4m21.969513024s to restartPrimaryControlPlane
	W0722 11:56:04.851256   60225 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:56:04.851288   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:56:05.544535   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:56:05.556946   59477 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:56:05.578633   59477 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:56:05.578705   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:05.578715   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-802149 minikube.k8s.io/updated_at=2024_07_22T11_56_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=embed-certs-802149 minikube.k8s.io/primary=true
	I0722 11:56:05.614944   59477 ops.go:34] apiserver oom_adj: -16
	I0722 11:56:05.773354   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:06.273578   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:06.773980   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:07.274302   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:07.774175   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:08.274316   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:08.774096   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:09.273401   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:05.678724   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:08.178575   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:09.774010   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.274337   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.773845   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:11.273387   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:11.773610   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:12.273579   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:12.774429   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:13.273474   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:13.774397   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:14.273900   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.677662   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:12.679646   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:15.177660   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:14.774140   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:15.273579   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:15.773981   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:16.273668   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:16.773814   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:17.274094   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:17.773477   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:18.273407   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:18.774424   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:19.274215   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:19.371507   59477 kubeadm.go:1113] duration metric: took 13.792861511s to wait for elevateKubeSystemPrivileges
	I0722 11:56:19.371549   59477 kubeadm.go:394] duration metric: took 5m16.138448524s to StartCluster
	I0722 11:56:19.371572   59477 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:19.371660   59477 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:56:19.373430   59477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:19.373759   59477 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:56:19.373841   59477 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:56:19.373922   59477 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-802149"
	I0722 11:56:19.373932   59477 addons.go:69] Setting default-storageclass=true in profile "embed-certs-802149"
	I0722 11:56:19.373962   59477 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-802149"
	I0722 11:56:19.373963   59477 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-802149"
	W0722 11:56:19.373971   59477 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:56:19.373974   59477 addons.go:69] Setting metrics-server=true in profile "embed-certs-802149"
	I0722 11:56:19.373998   59477 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:56:19.374004   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.374013   59477 addons.go:234] Setting addon metrics-server=true in "embed-certs-802149"
	W0722 11:56:19.374021   59477 addons.go:243] addon metrics-server should already be in state true
	I0722 11:56:19.374044   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.374353   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374376   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374383   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.374390   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374401   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.374418   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.375347   59477 out.go:177] * Verifying Kubernetes components...
	I0722 11:56:19.376850   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:56:19.393500   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0722 11:56:19.394178   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.394524   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36485
	I0722 11:56:19.394704   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0722 11:56:19.394894   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.395064   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395087   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395137   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.395433   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395451   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395471   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.395586   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395607   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395653   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.395754   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.395956   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.396317   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.396345   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.396481   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.396512   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.399476   59477 addons.go:234] Setting addon default-storageclass=true in "embed-certs-802149"
	W0722 11:56:19.399502   59477 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:56:19.399530   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.399879   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.399908   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.411862   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44855
	I0722 11:56:19.412247   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.412708   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.412733   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.413106   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.413324   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.414100   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0722 11:56:19.414530   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.415017   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.415041   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.415100   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.415300   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0722 11:56:19.415340   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.415574   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.415662   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.416068   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.416095   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.416416   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.416861   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.416905   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.417086   59477 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:56:19.417365   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.418373   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:56:19.418392   59477 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:56:19.418411   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.419202   59477 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:56:19.420581   59477 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:19.420595   59477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:56:19.420608   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.421600   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.422201   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.422224   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.422367   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.422535   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.422697   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.422820   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.423577   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.424183   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.424211   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.424347   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.424543   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.424694   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.424812   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.432998   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0722 11:56:19.433395   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.433820   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.433837   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.434137   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.434300   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.435804   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.436013   59477 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:19.436029   59477 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:56:19.436043   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.439161   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.439507   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.439527   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.439666   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.439832   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.439968   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.440111   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.579586   59477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:56:19.613199   59477 node_ready.go:35] waiting up to 6m0s for node "embed-certs-802149" to be "Ready" ...
	I0722 11:56:19.621008   59477 node_ready.go:49] node "embed-certs-802149" has status "Ready":"True"
	I0722 11:56:19.621026   59477 node_ready.go:38] duration metric: took 7.803634ms for node "embed-certs-802149" to be "Ready" ...
	I0722 11:56:19.621035   59477 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:19.626247   59477 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:17.676844   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:19.677982   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:19.721316   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:19.751087   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:19.752762   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:56:19.752782   59477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:56:19.855879   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:56:19.855913   59477 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:56:19.929321   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:19.929353   59477 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:56:19.985335   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:20.449104   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449132   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449106   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449220   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449514   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449514   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.449531   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449540   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.449553   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449566   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449879   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449880   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.449902   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.450851   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.450865   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.450872   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.450877   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.451078   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.451104   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.451119   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.470273   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.470292   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.470576   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.470623   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.470597   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.627931   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.627953   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.628276   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.628294   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.628293   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.628308   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.628317   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.628560   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.628605   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.628619   59477 addons.go:475] Verifying addon metrics-server=true in "embed-certs-802149"
	I0722 11:56:20.628625   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.630168   59477 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:56:20.631410   59477 addons.go:510] duration metric: took 1.257573445s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 11:56:21.631628   59477 pod_ready.go:102] pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:22.159823   59477 pod_ready.go:92] pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.159847   59477 pod_ready.go:81] duration metric: took 2.533579062s for pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.159856   59477 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.180462   59477 pod_ready.go:92] pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.180487   59477 pod_ready.go:81] duration metric: took 20.623565ms for pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.180499   59477 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.194180   59477 pod_ready.go:92] pod "etcd-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.194207   59477 pod_ready.go:81] duration metric: took 13.700217ms for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.194219   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.199321   59477 pod_ready.go:92] pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.199343   59477 pod_ready.go:81] duration metric: took 5.116564ms for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.199355   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.203845   59477 pod_ready.go:92] pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.203865   59477 pod_ready.go:81] duration metric: took 4.502825ms for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.203875   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w89tg" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.529773   59477 pod_ready.go:92] pod "kube-proxy-w89tg" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.529797   59477 pod_ready.go:81] duration metric: took 325.914184ms for pod "kube-proxy-w89tg" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.529809   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.930597   59477 pod_ready.go:92] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.930620   59477 pod_ready.go:81] duration metric: took 400.802915ms for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.930631   59477 pod_ready.go:38] duration metric: took 3.309586025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:22.930649   59477 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:56:22.930707   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:56:22.946660   59477 api_server.go:72] duration metric: took 3.57286966s to wait for apiserver process to appear ...
	I0722 11:56:22.946684   59477 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:56:22.946703   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:56:22.950940   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I0722 11:56:22.951817   59477 api_server.go:141] control plane version: v1.30.3
	I0722 11:56:22.951840   59477 api_server.go:131] duration metric: took 5.148295ms to wait for apiserver health ...
	I0722 11:56:22.951848   59477 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:56:23.134122   59477 system_pods.go:59] 9 kube-system pods found
	I0722 11:56:23.134153   59477 system_pods.go:61] "coredns-7db6d8ff4d-c2dkr" [c82a689e-5a99-4889-808f-3e1e199323d8] Running
	I0722 11:56:23.134159   59477 system_pods.go:61] "coredns-7db6d8ff4d-kz8d9" [26d2d65c-aa13-4d94-b091-bf674fee0185] Running
	I0722 11:56:23.134163   59477 system_pods.go:61] "etcd-embed-certs-802149" [a30aead7-f9cf-487f-9d2e-dac877edf07a] Running
	I0722 11:56:23.134166   59477 system_pods.go:61] "kube-apiserver-embed-certs-802149" [b7f50315-5043-4abd-a40e-c2d285b66faa] Running
	I0722 11:56:23.134169   59477 system_pods.go:61] "kube-controller-manager-embed-certs-802149" [e96d4b9d-0bf0-40b4-b483-ba558587fe91] Running
	I0722 11:56:23.134172   59477 system_pods.go:61] "kube-proxy-w89tg" [da4d3074-e552-4c7b-ba0f-f57a3b80f529] Running
	I0722 11:56:23.134175   59477 system_pods.go:61] "kube-scheduler-embed-certs-802149" [5f1d8d32-7564-4d2a-ba97-283754771b15] Running
	I0722 11:56:23.134181   59477 system_pods.go:61] "metrics-server-569cc877fc-88d4n" [b705d674-b431-4946-aa67-871d7d2f9e08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:56:23.134186   59477 system_pods.go:61] "storage-provisioner" [a68fcb5f-42b5-408e-9c10-d86b14a1b993] Running
	I0722 11:56:23.134195   59477 system_pods.go:74] duration metric: took 182.340929ms to wait for pod list to return data ...
	I0722 11:56:23.134204   59477 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:56:23.330549   59477 default_sa.go:45] found service account: "default"
	I0722 11:56:23.330573   59477 default_sa.go:55] duration metric: took 196.363183ms for default service account to be created ...
	I0722 11:56:23.330582   59477 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:56:23.532750   59477 system_pods.go:86] 9 kube-system pods found
	I0722 11:56:23.532774   59477 system_pods.go:89] "coredns-7db6d8ff4d-c2dkr" [c82a689e-5a99-4889-808f-3e1e199323d8] Running
	I0722 11:56:23.532779   59477 system_pods.go:89] "coredns-7db6d8ff4d-kz8d9" [26d2d65c-aa13-4d94-b091-bf674fee0185] Running
	I0722 11:56:23.532784   59477 system_pods.go:89] "etcd-embed-certs-802149" [a30aead7-f9cf-487f-9d2e-dac877edf07a] Running
	I0722 11:56:23.532788   59477 system_pods.go:89] "kube-apiserver-embed-certs-802149" [b7f50315-5043-4abd-a40e-c2d285b66faa] Running
	I0722 11:56:23.532795   59477 system_pods.go:89] "kube-controller-manager-embed-certs-802149" [e96d4b9d-0bf0-40b4-b483-ba558587fe91] Running
	I0722 11:56:23.532799   59477 system_pods.go:89] "kube-proxy-w89tg" [da4d3074-e552-4c7b-ba0f-f57a3b80f529] Running
	I0722 11:56:23.532802   59477 system_pods.go:89] "kube-scheduler-embed-certs-802149" [5f1d8d32-7564-4d2a-ba97-283754771b15] Running
	I0722 11:56:23.532809   59477 system_pods.go:89] "metrics-server-569cc877fc-88d4n" [b705d674-b431-4946-aa67-871d7d2f9e08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:56:23.532813   59477 system_pods.go:89] "storage-provisioner" [a68fcb5f-42b5-408e-9c10-d86b14a1b993] Running
	I0722 11:56:23.532821   59477 system_pods.go:126] duration metric: took 202.234836ms to wait for k8s-apps to be running ...
	I0722 11:56:23.532832   59477 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:56:23.532876   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:56:23.547953   59477 system_svc.go:56] duration metric: took 15.113032ms WaitForService to wait for kubelet
	I0722 11:56:23.547983   59477 kubeadm.go:582] duration metric: took 4.174196727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:56:23.548007   59477 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:56:23.730474   59477 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:56:23.730495   59477 node_conditions.go:123] node cpu capacity is 2
	I0722 11:56:23.730505   59477 node_conditions.go:105] duration metric: took 182.492899ms to run NodePressure ...
	I0722 11:56:23.730516   59477 start.go:241] waiting for startup goroutines ...
	I0722 11:56:23.730522   59477 start.go:246] waiting for cluster config update ...
	I0722 11:56:23.730532   59477 start.go:255] writing updated cluster config ...
	I0722 11:56:23.730772   59477 ssh_runner.go:195] Run: rm -f paused
	I0722 11:56:23.780571   59477 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 11:56:23.782536   59477 out.go:177] * Done! kubectl is now configured to use "embed-certs-802149" cluster and "default" namespace by default
	I0722 11:56:22.178416   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:24.676529   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:26.677122   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:29.177390   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:31.677291   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:33.677523   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:35.170828   58921 pod_ready.go:81] duration metric: took 4m0.000275806s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" ...
	E0722 11:56:35.170855   58921 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:56:35.170871   58921 pod_ready.go:38] duration metric: took 4m13.545311637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:35.170901   58921 kubeadm.go:597] duration metric: took 4m20.764141089s to restartPrimaryControlPlane
	W0722 11:56:35.170949   58921 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:56:35.170973   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:56:36.176806   60225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.325500952s)
	I0722 11:56:36.176871   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:56:36.193398   60225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:56:36.203561   60225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:56:36.213561   60225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:56:36.213584   60225 kubeadm.go:157] found existing configuration files:
	
	I0722 11:56:36.213654   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 11:56:36.223204   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:56:36.223265   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:56:36.232550   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 11:56:36.241899   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:56:36.241961   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:56:36.252184   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 11:56:36.262462   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:56:36.262518   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:56:36.272942   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 11:56:36.282776   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:56:36.282831   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:56:36.292375   60225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:56:36.490647   60225 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:56:44.713923   60225 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 11:56:44.713975   60225 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:56:44.714046   60225 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:56:44.714145   60225 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:56:44.714255   60225 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:56:44.714330   60225 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:56:44.715906   60225 out.go:204]   - Generating certificates and keys ...
	I0722 11:56:44.716026   60225 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:56:44.716122   60225 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:56:44.716247   60225 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:56:44.716346   60225 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:56:44.716449   60225 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:56:44.716530   60225 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:56:44.716617   60225 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:56:44.716704   60225 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:56:44.716820   60225 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:56:44.716939   60225 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:56:44.717000   60225 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:56:44.717078   60225 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:56:44.717159   60225 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:56:44.717238   60225 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:56:44.717312   60225 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:56:44.717397   60225 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:56:44.717471   60225 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:56:44.717594   60225 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:56:44.717684   60225 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:56:44.719097   60225 out.go:204]   - Booting up control plane ...
	I0722 11:56:44.719201   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:56:44.719288   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:56:44.719387   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:56:44.719548   60225 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:56:44.719662   60225 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:56:44.719698   60225 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:56:44.719819   60225 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:56:44.719909   60225 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:56:44.719969   60225 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001605769s
	I0722 11:56:44.720047   60225 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:56:44.720114   60225 kubeadm.go:310] [api-check] The API server is healthy after 4.501377908s
	I0722 11:56:44.720253   60225 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:56:44.720428   60225 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:56:44.720522   60225 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:56:44.720781   60225 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-605740 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:56:44.720842   60225 kubeadm.go:310] [bootstrap-token] Using token: 51n0hg.x5nghdd43rf7nm3m
	I0722 11:56:44.722095   60225 out.go:204]   - Configuring RBAC rules ...
	I0722 11:56:44.722193   60225 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:56:44.722266   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:56:44.722405   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:56:44.722575   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:56:44.722695   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:56:44.722769   60225 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:56:44.722875   60225 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:56:44.722916   60225 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:56:44.722957   60225 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:56:44.722966   60225 kubeadm.go:310] 
	I0722 11:56:44.723046   60225 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:56:44.723055   60225 kubeadm.go:310] 
	I0722 11:56:44.723117   60225 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:56:44.723123   60225 kubeadm.go:310] 
	I0722 11:56:44.723147   60225 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:56:44.723201   60225 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:56:44.723244   60225 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:56:44.723250   60225 kubeadm.go:310] 
	I0722 11:56:44.723313   60225 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:56:44.723324   60225 kubeadm.go:310] 
	I0722 11:56:44.723374   60225 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:56:44.723387   60225 kubeadm.go:310] 
	I0722 11:56:44.723462   60225 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:56:44.723568   60225 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:56:44.723624   60225 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:56:44.723630   60225 kubeadm.go:310] 
	I0722 11:56:44.723703   60225 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:56:44.723762   60225 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:56:44.723768   60225 kubeadm.go:310] 
	I0722 11:56:44.723832   60225 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 51n0hg.x5nghdd43rf7nm3m \
	I0722 11:56:44.723935   60225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:56:44.723960   60225 kubeadm.go:310] 	--control-plane 
	I0722 11:56:44.723966   60225 kubeadm.go:310] 
	I0722 11:56:44.724035   60225 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:56:44.724041   60225 kubeadm.go:310] 
	I0722 11:56:44.724109   60225 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 51n0hg.x5nghdd43rf7nm3m \
	I0722 11:56:44.724210   60225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:56:44.724222   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:56:44.724231   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:56:44.725651   60225 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:56:44.726843   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:56:44.737856   60225 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
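The 496-byte conflist written above is not reproduced in the log. As a rough sketch only (field values are assumed, not taken from this run), a minimal bridge CNI configuration of the kind written to /etc/cni/net.d/1-k8s.conflist has this shape:

    # Illustrative only -- the actual file contents are not shown in this log.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF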
	I0722 11:56:44.756687   60225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:56:44.756772   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:44.756790   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-605740 minikube.k8s.io/updated_at=2024_07_22T11_56_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=default-k8s-diff-port-605740 minikube.k8s.io/primary=true
	I0722 11:56:44.782416   60225 ops.go:34] apiserver oom_adj: -16
	I0722 11:56:44.957801   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:45.458616   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:45.958542   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:46.458436   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:46.957908   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:47.458058   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:47.958520   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:48.457970   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:48.958357   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:49.457964   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:49.958236   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:50.458547   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:50.958594   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:51.457865   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:51.958297   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:52.458486   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:52.957877   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:53.458199   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:53.958331   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:54.458178   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:54.958725   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:55.458619   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:55.958861   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:56.458294   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:56.958145   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:57.458414   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:57.566568   60225 kubeadm.go:1113] duration metric: took 12.809852518s to wait for elevateKubeSystemPrivileges
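The repeated "kubectl get sa default" runs above are a readiness poll: the same query is retried until the API server can serve the default service account, at which point the elevateKubeSystemPrivileges duration is recorded. A shell sketch of the same idea (binary path and kubeconfig copied from the log; the polling interval is assumed):

    KUBECTL=/var/lib/minikube/binaries/v1.30.3/kubectl
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log shows roughly half-second retries
    done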
	I0722 11:56:57.566604   60225 kubeadm.go:394] duration metric: took 5m14.748062926s to StartCluster
	I0722 11:56:57.566626   60225 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:57.566709   60225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:56:57.568307   60225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:57.568592   60225 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:56:57.568648   60225 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:56:57.568731   60225 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568765   60225 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.568778   60225 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:56:57.568777   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:56:57.568765   60225 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568775   60225 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568811   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.568813   60225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-605740"
	I0722 11:56:57.568819   60225 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.568828   60225 addons.go:243] addon metrics-server should already be in state true
	I0722 11:56:57.568849   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.569145   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569170   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569187   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.569191   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.569216   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569265   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.570171   60225 out.go:177] * Verifying Kubernetes components...
	I0722 11:56:57.571536   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:56:57.585174   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44223
	I0722 11:56:57.585655   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.586149   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.586174   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.586532   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.587082   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.587135   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.588871   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43863
	I0722 11:56:57.588968   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0722 11:56:57.589289   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.589398   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.589785   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.589809   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.589875   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.589898   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.590183   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.590223   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.590393   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.590860   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.590906   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.594024   60225 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.594046   60225 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:56:57.594074   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.594755   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.594794   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.604913   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0722 11:56:57.605449   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.606000   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.606017   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.606459   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37393
	I0722 11:56:57.606768   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.606871   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.607129   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.607259   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.607273   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.607591   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.607779   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.609472   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.609513   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46833
	I0722 11:56:57.609611   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.609857   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.610299   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.610314   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.610552   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.611030   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.611066   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.611075   60225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:56:57.611086   60225 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:56:57.612333   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:56:57.612352   60225 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:56:57.612373   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.612449   60225 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:57.612463   60225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:56:57.612480   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.615359   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.615950   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.615979   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.616137   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.616288   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.616341   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.616503   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.616636   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.616806   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.616830   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.617016   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.617204   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.617433   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.617587   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.627323   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0722 11:56:57.627674   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.628110   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.628129   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.628426   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.628581   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.630063   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.630250   60225 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:57.630264   60225 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:56:57.630276   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.633223   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.633589   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.633652   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.633864   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.634041   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.634208   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.634349   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.800318   60225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:56:57.838800   60225 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-605740" to be "Ready" ...
	I0722 11:56:57.858375   60225 node_ready.go:49] node "default-k8s-diff-port-605740" has status "Ready":"True"
	I0722 11:56:57.858401   60225 node_ready.go:38] duration metric: took 19.564389ms for node "default-k8s-diff-port-605740" to be "Ready" ...
	I0722 11:56:57.858412   60225 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:57.864271   60225 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.891296   60225 pod_ready.go:92] pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.891327   60225 pod_ready.go:81] duration metric: took 27.02499ms for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.891341   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.904548   60225 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.904572   60225 pod_ready.go:81] duration metric: took 13.223985ms for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.904582   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.922071   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:56:57.922090   60225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:56:57.936115   60225 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.936135   60225 pod_ready.go:81] duration metric: took 31.547556ms for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.936144   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-58qcp" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.956826   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:57.959831   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:57.970183   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:56:57.970209   60225 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:56:58.023756   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:58.023783   60225 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:56:58.132167   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:58.836074   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836101   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836129   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836151   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836444   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.836480   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836489   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.836496   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836507   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836603   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.836635   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836645   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.836653   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836660   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836797   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836809   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.838425   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.838441   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.838457   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.855236   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.855255   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.855533   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.855551   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.855558   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:59.133028   60225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.000816157s)
	I0722 11:56:59.133092   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:59.133108   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:59.133395   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:59.133412   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:59.133420   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:59.133428   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:59.133715   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:59.133744   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:59.133766   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:59.133788   60225 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-605740"
	I0722 11:56:59.135326   60225 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:56:59.136408   60225 addons.go:510] duration metric: took 1.567760763s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 11:56:59.942782   60225 pod_ready.go:102] pod "kube-proxy-58qcp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:00.442434   60225 pod_ready.go:92] pod "kube-proxy-58qcp" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:00.442455   60225 pod_ready.go:81] duration metric: took 2.50630376s for pod "kube-proxy-58qcp" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.442463   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.446225   60225 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:00.446246   60225 pod_ready.go:81] duration metric: took 3.778284ms for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.446254   60225 pod_ready.go:38] duration metric: took 2.58782997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:57:00.446267   60225 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:57:00.446310   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:57:00.461412   60225 api_server.go:72] duration metric: took 2.892790415s to wait for apiserver process to appear ...
	I0722 11:57:00.461431   60225 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:57:00.461448   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:57:00.465904   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 200:
	ok
	I0722 11:57:00.466558   60225 api_server.go:141] control plane version: v1.30.3
	I0722 11:57:00.466577   60225 api_server.go:131] duration metric: took 5.13931ms to wait for apiserver health ...
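The healthz probe logged above can be reproduced by hand against the same endpoint; a rough manual equivalent (assuming anonymous access to /healthz is allowed, as it is by default; -k skips TLS verification):

    curl -k https://192.168.39.87:8444/healthz
    # expected body: ok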
	I0722 11:57:00.466585   60225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:57:00.471230   60225 system_pods.go:59] 9 kube-system pods found
	I0722 11:57:00.471254   60225 system_pods.go:61] "coredns-7db6d8ff4d-nlfgl" [c02dda63-e71b-429f-b9d5-0b2ca40e8dcc] Running
	I0722 11:57:00.471260   60225 system_pods.go:61] "coredns-7db6d8ff4d-tnnxf" [337c6df7-035c-488d-a123-a410d76d836b] Running
	I0722 11:57:00.471265   60225 system_pods.go:61] "etcd-default-k8s-diff-port-605740" [d1cda641-22de-4bda-ac2b-ed0a92bbde9f] Running
	I0722 11:57:00.471270   60225 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-605740" [b3c4fc30-392e-40db-8be3-fea337a40ca5] Running
	I0722 11:57:00.471274   60225 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-605740" [7cddd0d3-487d-47f5-9c6d-873c5731170e] Running
	I0722 11:57:00.471279   60225 system_pods.go:61] "kube-proxy-58qcp" [25c02c70-a840-410c-9d48-3d15a3927a77] Running
	I0722 11:57:00.471283   60225 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-605740" [b7678aec-927b-40c2-a91e-4c0444c59a90] Running
	I0722 11:57:00.471293   60225 system_pods.go:61] "metrics-server-569cc877fc-2xv7x" [7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:00.471299   60225 system_pods.go:61] "storage-provisioner" [c4ff4a3e-008c-4c4e-9eb3-281c46b10279] Running
	I0722 11:57:00.471309   60225 system_pods.go:74] duration metric: took 4.717009ms to wait for pod list to return data ...
	I0722 11:57:00.471320   60225 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:57:00.642325   60225 default_sa.go:45] found service account: "default"
	I0722 11:57:00.642356   60225 default_sa.go:55] duration metric: took 171.03007ms for default service account to be created ...
	I0722 11:57:00.642365   60225 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:57:00.846043   60225 system_pods.go:86] 9 kube-system pods found
	I0722 11:57:00.846071   60225 system_pods.go:89] "coredns-7db6d8ff4d-nlfgl" [c02dda63-e71b-429f-b9d5-0b2ca40e8dcc] Running
	I0722 11:57:00.846079   60225 system_pods.go:89] "coredns-7db6d8ff4d-tnnxf" [337c6df7-035c-488d-a123-a410d76d836b] Running
	I0722 11:57:00.846083   60225 system_pods.go:89] "etcd-default-k8s-diff-port-605740" [d1cda641-22de-4bda-ac2b-ed0a92bbde9f] Running
	I0722 11:57:00.846087   60225 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-605740" [b3c4fc30-392e-40db-8be3-fea337a40ca5] Running
	I0722 11:57:00.846092   60225 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-605740" [7cddd0d3-487d-47f5-9c6d-873c5731170e] Running
	I0722 11:57:00.846096   60225 system_pods.go:89] "kube-proxy-58qcp" [25c02c70-a840-410c-9d48-3d15a3927a77] Running
	I0722 11:57:00.846100   60225 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-605740" [b7678aec-927b-40c2-a91e-4c0444c59a90] Running
	I0722 11:57:00.846106   60225 system_pods.go:89] "metrics-server-569cc877fc-2xv7x" [7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:00.846110   60225 system_pods.go:89] "storage-provisioner" [c4ff4a3e-008c-4c4e-9eb3-281c46b10279] Running
	I0722 11:57:00.846118   60225 system_pods.go:126] duration metric: took 203.748606ms to wait for k8s-apps to be running ...
	I0722 11:57:00.846124   60225 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:57:00.846168   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:00.867261   60225 system_svc.go:56] duration metric: took 21.130025ms WaitForService to wait for kubelet
	I0722 11:57:00.867290   60225 kubeadm.go:582] duration metric: took 3.298668854s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:57:00.867314   60225 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:57:01.042201   60225 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:57:01.042226   60225 node_conditions.go:123] node cpu capacity is 2
	I0722 11:57:01.042237   60225 node_conditions.go:105] duration metric: took 174.91764ms to run NodePressure ...
	I0722 11:57:01.042249   60225 start.go:241] waiting for startup goroutines ...
	I0722 11:57:01.042256   60225 start.go:246] waiting for cluster config update ...
	I0722 11:57:01.042268   60225 start.go:255] writing updated cluster config ...
	I0722 11:57:01.042526   60225 ssh_runner.go:195] Run: rm -f paused
	I0722 11:57:01.090643   60225 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 11:57:01.092526   60225 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-605740" cluster and "default" namespace by default
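At this point the kubeconfig at /home/jenkins/minikube-integration/19313-5960/kubeconfig points at the new cluster; a quick sanity check (illustrative commands, not part of the test run) would be:

    kubectl config current-context    # expected: default-k8s-diff-port-605740
    kubectl get nodes -o wide         # the control-plane node should report Ready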
	I0722 11:57:01.339755   58921 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.168752701s)
	I0722 11:57:01.339827   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:01.368833   58921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:57:01.392011   58921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:57:01.403725   58921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:57:01.403746   58921 kubeadm.go:157] found existing configuration files:
	
	I0722 11:57:01.403795   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:57:01.421922   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:57:01.422011   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:57:01.434303   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:57:01.445095   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:57:01.445154   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:57:01.464906   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:57:01.475002   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:57:01.475074   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:57:01.484493   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:57:01.493467   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:57:01.493523   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
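The four grep/rm pairs above amount to stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint (here the files simply do not exist) is removed before kubeadm init runs again. Collapsed into a sketch (file names and endpoint taken from the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done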
	I0722 11:57:01.502496   58921 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:57:01.550079   58921 kubeadm.go:310] W0722 11:57:01.524041    2933 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 11:57:01.551819   58921 kubeadm.go:310] W0722 11:57:01.525728    2933 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 11:57:01.670102   58921 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:57:10.497048   58921 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0722 11:57:10.497168   58921 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:57:10.497273   58921 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:57:10.497381   58921 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:57:10.497498   58921 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 11:57:10.497555   58921 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:57:10.498805   58921 out.go:204]   - Generating certificates and keys ...
	I0722 11:57:10.498905   58921 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:57:10.498982   58921 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:57:10.499087   58921 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:57:10.499182   58921 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:57:10.499265   58921 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:57:10.499326   58921 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:57:10.499385   58921 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:57:10.499500   58921 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:57:10.499633   58921 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:57:10.499724   58921 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:57:10.499784   58921 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:57:10.499840   58921 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:57:10.499892   58921 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:57:10.499982   58921 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:57:10.500064   58921 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:57:10.500155   58921 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:57:10.500241   58921 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:57:10.500343   58921 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:57:10.500442   58921 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:57:10.501847   58921 out.go:204]   - Booting up control plane ...
	I0722 11:57:10.501931   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:57:10.501995   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:57:10.502068   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:57:10.502203   58921 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:57:10.502318   58921 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:57:10.502367   58921 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:57:10.502477   58921 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:57:10.502541   58921 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:57:10.502599   58921 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501448538s
	I0722 11:57:10.502660   58921 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:57:10.502712   58921 kubeadm.go:310] [api-check] The API server is healthy after 5.001578291s
	I0722 11:57:10.502801   58921 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:57:10.502914   58921 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:57:10.502962   58921 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:57:10.503159   58921 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-339929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:57:10.503211   58921 kubeadm.go:310] [bootstrap-token] Using token: ivof4z.0tnj9kdw05524oxn
	I0722 11:57:10.504409   58921 out.go:204]   - Configuring RBAC rules ...
	I0722 11:57:10.504501   58921 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:57:10.504616   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:57:10.504780   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:57:10.504970   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:57:10.505144   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:57:10.505257   58921 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:57:10.505410   58921 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:57:10.505471   58921 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:57:10.505538   58921 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:57:10.505546   58921 kubeadm.go:310] 
	I0722 11:57:10.505631   58921 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:57:10.505649   58921 kubeadm.go:310] 
	I0722 11:57:10.505755   58921 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:57:10.505764   58921 kubeadm.go:310] 
	I0722 11:57:10.505804   58921 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:57:10.505897   58921 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:57:10.505972   58921 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:57:10.505982   58921 kubeadm.go:310] 
	I0722 11:57:10.506059   58921 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:57:10.506067   58921 kubeadm.go:310] 
	I0722 11:57:10.506128   58921 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:57:10.506136   58921 kubeadm.go:310] 
	I0722 11:57:10.506205   58921 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:57:10.506306   58921 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:57:10.506414   58921 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:57:10.506423   58921 kubeadm.go:310] 
	I0722 11:57:10.506520   58921 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:57:10.506617   58921 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:57:10.506626   58921 kubeadm.go:310] 
	I0722 11:57:10.506742   58921 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ivof4z.0tnj9kdw05524oxn \
	I0722 11:57:10.506885   58921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:57:10.506922   58921 kubeadm.go:310] 	--control-plane 
	I0722 11:57:10.506931   58921 kubeadm.go:310] 
	I0722 11:57:10.507044   58921 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:57:10.507057   58921 kubeadm.go:310] 
	I0722 11:57:10.507156   58921 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ivof4z.0tnj9kdw05524oxn \
	I0722 11:57:10.507309   58921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:57:10.507321   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:57:10.507330   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:57:10.508685   58921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:57:10.509747   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:57:10.520250   58921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:57:10.540094   58921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:57:10.540196   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:10.540212   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-339929 minikube.k8s.io/updated_at=2024_07_22T11_57_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=no-preload-339929 minikube.k8s.io/primary=true
	I0722 11:57:10.763453   58921 ops.go:34] apiserver oom_adj: -16
	I0722 11:57:10.763505   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:11.264268   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:11.764311   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:12.264344   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:12.764563   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:13.264149   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:13.764260   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:14.263595   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:14.763794   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:15.263787   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:15.343777   58921 kubeadm.go:1113] duration metric: took 4.803631766s to wait for elevateKubeSystemPrivileges
	I0722 11:57:15.343817   58921 kubeadm.go:394] duration metric: took 5m0.988139889s to StartCluster
	I0722 11:57:15.343840   58921 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:57:15.343940   58921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:57:15.345913   58921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:57:15.346216   58921 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:57:15.346387   58921 config.go:182] Loaded profile config "no-preload-339929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 11:57:15.346343   58921 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:57:15.346441   58921 addons.go:69] Setting storage-provisioner=true in profile "no-preload-339929"
	I0722 11:57:15.346454   58921 addons.go:69] Setting metrics-server=true in profile "no-preload-339929"
	I0722 11:57:15.346483   58921 addons.go:234] Setting addon metrics-server=true in "no-preload-339929"
	W0722 11:57:15.346491   58921 addons.go:243] addon metrics-server should already be in state true
	I0722 11:57:15.346485   58921 addons.go:234] Setting addon storage-provisioner=true in "no-preload-339929"
	W0722 11:57:15.346502   58921 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:57:15.346515   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.346529   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.346445   58921 addons.go:69] Setting default-storageclass=true in profile "no-preload-339929"
	I0722 11:57:15.346600   58921 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-339929"
	I0722 11:57:15.346890   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.346920   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.346890   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.346994   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.347007   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.347025   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.347928   58921 out.go:177] * Verifying Kubernetes components...
	I0722 11:57:15.352932   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:57:15.362633   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0722 11:57:15.362665   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I0722 11:57:15.362630   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34691
	I0722 11:57:15.363041   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363053   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363133   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363521   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363537   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363544   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363558   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363568   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363587   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363905   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.363945   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.364078   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.364104   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.364485   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.364517   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.364602   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.364629   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.367146   58921 addons.go:234] Setting addon default-storageclass=true in "no-preload-339929"
	W0722 11:57:15.367170   58921 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:57:15.367197   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.367419   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.367436   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.380125   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46561
	I0722 11:57:15.380393   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
	I0722 11:57:15.380557   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.380972   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.381545   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.381546   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.381570   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.381585   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.381956   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.381987   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.382133   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.382152   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.383766   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.383925   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.384000   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0722 11:57:15.384347   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.384833   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.384856   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.385195   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.385635   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.385664   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.386055   58921 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:57:15.386060   58921 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:57:15.387105   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:57:15.387119   58921 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:57:15.387138   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.387186   58921 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:57:15.387197   58921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:57:15.387215   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.390591   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.390928   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.390975   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.390996   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.391233   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.391366   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.391387   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.391423   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.391599   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.391632   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.391802   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.391841   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.391986   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.392111   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.401709   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34965
	I0722 11:57:15.402082   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.402543   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.402563   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.402854   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.403074   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.404406   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.404603   58921 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:57:15.404617   58921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:57:15.404633   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.407332   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.407829   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.407853   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.408041   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.408218   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.408356   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.408491   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.550538   58921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:57:15.568066   58921 node_ready.go:35] waiting up to 6m0s for node "no-preload-339929" to be "Ready" ...
	I0722 11:57:15.577034   58921 node_ready.go:49] node "no-preload-339929" has status "Ready":"True"
	I0722 11:57:15.577054   58921 node_ready.go:38] duration metric: took 8.96328ms for node "no-preload-339929" to be "Ready" ...
	I0722 11:57:15.577062   58921 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:57:15.587213   58921 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:15.629092   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:57:15.714856   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:57:15.714885   58921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:57:15.746923   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:57:15.781300   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:57:15.781327   58921 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:57:15.842787   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:57:15.842816   58921 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:57:15.884901   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:57:16.165926   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.165955   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166184   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166200   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166255   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166296   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166315   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166329   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166340   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166454   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166497   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166520   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166542   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166581   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166595   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166551   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166519   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166954   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166969   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.199171   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.199196   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.199533   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.199558   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.199573   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.678992   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.679015   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.679366   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.679389   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.679400   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.679400   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.679408   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.679658   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.679699   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.679708   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.679719   58921 addons.go:475] Verifying addon metrics-server=true in "no-preload-339929"
	I0722 11:57:16.681483   58921 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:57:16.682888   58921 addons.go:510] duration metric: took 1.336544744s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
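	The addon manifests staged above (storage-provisioner.yaml, storageclass.yaml and the metrics-server files) are applied with the bundled kubectl and then verified. Outside the test harness, roughly the same result can be reached through the minikube CLI; a minimal sketch, assuming the same profile name and the driver binary used elsewhere in this report (illustrative only -- in this run the addons are enabled as part of start via the toEnable map logged above):
	
		out/minikube-linux-amd64 -p no-preload-339929 addons enable storage-provisioner
		out/minikube-linux-amd64 -p no-preload-339929 addons enable metrics-server
		out/minikube-linux-amd64 -p no-preload-339929 addons list
	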
	I0722 11:57:17.596659   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:20.093596   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:24.750495   59674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:57:24.750641   59674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 11:57:24.752309   59674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:57:24.752368   59674 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:57:24.752499   59674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:57:24.752662   59674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:57:24.752788   59674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:57:24.752851   59674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:57:24.754464   59674 out.go:204]   - Generating certificates and keys ...
	I0722 11:57:24.754528   59674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:57:24.754595   59674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:57:24.754712   59674 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:57:24.754926   59674 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:57:24.755033   59674 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:57:24.755114   59674 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:57:24.755188   59674 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:57:24.755276   59674 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:57:24.755374   59674 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:57:24.755472   59674 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:57:24.755513   59674 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:57:24.755561   59674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:57:24.755606   59674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:57:24.755647   59674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:57:24.755700   59674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:57:24.755742   59674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:57:24.755836   59674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:57:24.755950   59674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:57:24.755986   59674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:57:24.756089   59674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:57:24.757395   59674 out.go:204]   - Booting up control plane ...
	I0722 11:57:24.757482   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:57:24.757566   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:57:24.757657   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:57:24.757905   59674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:57:24.758131   59674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:57:24.758205   59674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:57:24.758311   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.758565   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.758650   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.758852   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.758957   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759153   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759217   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759412   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759495   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759688   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759696   59674 kubeadm.go:310] 
	I0722 11:57:24.759729   59674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:57:24.759791   59674 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:57:24.759812   59674 kubeadm.go:310] 
	I0722 11:57:24.759868   59674 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:57:24.759903   59674 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:57:24.760077   59674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:57:24.760094   59674 kubeadm.go:310] 
	I0722 11:57:24.760245   59674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:57:24.760300   59674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:57:24.760350   59674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:57:24.760363   59674 kubeadm.go:310] 
	I0722 11:57:24.760534   59674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:57:24.760640   59674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:57:24.760654   59674 kubeadm.go:310] 
	I0722 11:57:24.760819   59674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:57:24.760902   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:57:24.761013   59674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:57:24.761124   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:57:24.761213   59674 kubeadm.go:310] 
	W0722 11:57:24.761263   59674 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0722 11:57:24.761321   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:57:25.222130   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:25.236593   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:57:25.247009   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:57:25.247026   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:57:25.247078   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:57:25.256617   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:57:25.256674   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:57:25.265950   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:57:25.275080   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:57:25.275133   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:57:25.285058   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:57:25.294015   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:57:25.294070   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:57:25.304009   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:57:25.313492   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:57:25.313565   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
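	The four grep/rm pairs above are minikube's stale-config check: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init is retried. A minimal shell sketch of that pattern (illustrative; minikube issues each command individually over SSH rather than as a loop):
	
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  # keep the file only if it already points at the expected control-plane endpoint
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"
		done
	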
	I0722 11:57:25.322903   59674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:57:22.593478   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.593498   58921 pod_ready.go:81] duration metric: took 7.006267885s for pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.593505   58921 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.598122   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.598149   58921 pod_ready.go:81] duration metric: took 4.631196ms for pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.598159   58921 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.602448   58921 pod_ready.go:92] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.602466   58921 pod_ready.go:81] duration metric: took 4.300795ms for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.602474   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.607921   58921 pod_ready.go:92] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.607940   58921 pod_ready.go:81] duration metric: took 5.46066ms for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.607951   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.114900   58921 pod_ready.go:92] pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.114929   58921 pod_ready.go:81] duration metric: took 1.506968399s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.114942   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b5xwg" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.190875   58921 pod_ready.go:92] pod "kube-proxy-b5xwg" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.190895   58921 pod_ready.go:81] duration metric: took 75.947595ms for pod "kube-proxy-b5xwg" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.190905   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.590994   58921 pod_ready.go:92] pod "kube-scheduler-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.591020   58921 pod_ready.go:81] duration metric: took 400.108088ms for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.591029   58921 pod_ready.go:38] duration metric: took 9.013958119s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:57:24.591051   58921 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:57:24.591110   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:57:24.609675   58921 api_server.go:72] duration metric: took 9.263421304s to wait for apiserver process to appear ...
	I0722 11:57:24.609701   58921 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:57:24.609719   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:57:24.613446   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 200:
	ok
	I0722 11:57:24.614282   58921 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 11:57:24.614301   58921 api_server.go:131] duration metric: took 4.591983ms to wait for apiserver health ...
	I0722 11:57:24.614310   58921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:57:24.796872   58921 system_pods.go:59] 9 kube-system pods found
	I0722 11:57:24.796910   58921 system_pods.go:61] "coredns-5cfdc65f69-vg4wp" [3556f321-9c0a-437f-a06e-4eca4b07781d] Running
	I0722 11:57:24.796917   58921 system_pods.go:61] "coredns-5cfdc65f69-xxf6t" [6e933cad-a95a-47c4-b8b9-89205619fb70] Running
	I0722 11:57:24.796922   58921 system_pods.go:61] "etcd-no-preload-339929" [6cd32101-c444-44a3-b024-708228c1b2de] Running
	I0722 11:57:24.796927   58921 system_pods.go:61] "kube-apiserver-no-preload-339929" [cf127459-d105-4ed0-9b65-653e9353e123] Running
	I0722 11:57:24.796933   58921 system_pods.go:61] "kube-controller-manager-no-preload-339929" [11b4341e-5d2d-41c2-a078-6345546aa418] Running
	I0722 11:57:24.796940   58921 system_pods.go:61] "kube-proxy-b5xwg" [6ec19ad2-170e-4402-bcb7-ebf14a2537ce] Running
	I0722 11:57:24.796944   58921 system_pods.go:61] "kube-scheduler-no-preload-339929" [bb0b4b2e-ba9c-4521-bcf9-67230983dc8e] Running
	I0722 11:57:24.796953   58921 system_pods.go:61] "metrics-server-78fcd8795b-9vzx2" [bb2ae44c-3190-4025-8f2e-e236c52da27e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:24.796960   58921 system_pods.go:61] "storage-provisioner" [f56d91d7-a252-485d-936d-3f44804d26ec] Running
	I0722 11:57:24.796973   58921 system_pods.go:74] duration metric: took 182.655813ms to wait for pod list to return data ...
	I0722 11:57:24.796985   58921 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:57:24.992009   58921 default_sa.go:45] found service account: "default"
	I0722 11:57:24.992032   58921 default_sa.go:55] duration metric: took 195.040103ms for default service account to be created ...
	I0722 11:57:24.992040   58921 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:57:25.196738   58921 system_pods.go:86] 9 kube-system pods found
	I0722 11:57:25.196763   58921 system_pods.go:89] "coredns-5cfdc65f69-vg4wp" [3556f321-9c0a-437f-a06e-4eca4b07781d] Running
	I0722 11:57:25.196768   58921 system_pods.go:89] "coredns-5cfdc65f69-xxf6t" [6e933cad-a95a-47c4-b8b9-89205619fb70] Running
	I0722 11:57:25.196772   58921 system_pods.go:89] "etcd-no-preload-339929" [6cd32101-c444-44a3-b024-708228c1b2de] Running
	I0722 11:57:25.196777   58921 system_pods.go:89] "kube-apiserver-no-preload-339929" [cf127459-d105-4ed0-9b65-653e9353e123] Running
	I0722 11:57:25.196781   58921 system_pods.go:89] "kube-controller-manager-no-preload-339929" [11b4341e-5d2d-41c2-a078-6345546aa418] Running
	I0722 11:57:25.196785   58921 system_pods.go:89] "kube-proxy-b5xwg" [6ec19ad2-170e-4402-bcb7-ebf14a2537ce] Running
	I0722 11:57:25.196789   58921 system_pods.go:89] "kube-scheduler-no-preload-339929" [bb0b4b2e-ba9c-4521-bcf9-67230983dc8e] Running
	I0722 11:57:25.196795   58921 system_pods.go:89] "metrics-server-78fcd8795b-9vzx2" [bb2ae44c-3190-4025-8f2e-e236c52da27e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:25.196799   58921 system_pods.go:89] "storage-provisioner" [f56d91d7-a252-485d-936d-3f44804d26ec] Running
	I0722 11:57:25.196806   58921 system_pods.go:126] duration metric: took 204.761601ms to wait for k8s-apps to be running ...
	I0722 11:57:25.196813   58921 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:57:25.196855   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:25.217589   58921 system_svc.go:56] duration metric: took 20.766557ms WaitForService to wait for kubelet
	I0722 11:57:25.217619   58921 kubeadm.go:582] duration metric: took 9.871369454s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:57:25.217641   58921 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:57:25.395091   58921 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:57:25.395116   58921 node_conditions.go:123] node cpu capacity is 2
	I0722 11:57:25.395128   58921 node_conditions.go:105] duration metric: took 177.480389ms to run NodePressure ...
	I0722 11:57:25.395143   58921 start.go:241] waiting for startup goroutines ...
	I0722 11:57:25.395159   58921 start.go:246] waiting for cluster config update ...
	I0722 11:57:25.395173   58921 start.go:255] writing updated cluster config ...
	I0722 11:57:25.395623   58921 ssh_runner.go:195] Run: rm -f paused
	I0722 11:57:25.449438   58921 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0722 11:57:25.450840   58921 out.go:177] * Done! kubectl is now configured to use "no-preload-339929" cluster and "default" namespace by default
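	At this point the no-preload-339929 profile is up and its context is the kubectl default. A quick way to confirm the state reported above, assuming the kubeconfig written by this run (not part of the test itself):
	
		kubectl --context no-preload-339929 get nodes
		kubectl --context no-preload-339929 -n kube-system get pods
	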
	I0722 11:57:25.545662   59674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:59:21.714624   59674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:59:21.714729   59674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 11:59:21.716617   59674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:59:21.716683   59674 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:59:21.716771   59674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:59:21.716939   59674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:59:21.717077   59674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:59:21.717136   59674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:59:21.718742   59674 out.go:204]   - Generating certificates and keys ...
	I0722 11:59:21.718837   59674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:59:21.718927   59674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:59:21.718995   59674 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:59:21.719065   59674 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:59:21.719140   59674 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:59:21.719187   59674 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:59:21.719251   59674 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:59:21.719329   59674 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:59:21.719408   59674 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:59:21.719497   59674 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:59:21.719538   59674 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:59:21.719592   59674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:59:21.719635   59674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:59:21.719680   59674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:59:21.719745   59674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:59:21.719823   59674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:59:21.719970   59674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:59:21.720056   59674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:59:21.720090   59674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:59:21.720147   59674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:59:21.721505   59674 out.go:204]   - Booting up control plane ...
	I0722 11:59:21.721586   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:59:21.721656   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:59:21.721712   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:59:21.721778   59674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:59:21.721923   59674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:59:21.721988   59674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:59:21.722045   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722201   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722272   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722431   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722488   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722658   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722730   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722885   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722943   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.723110   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.723118   59674 kubeadm.go:310] 
	I0722 11:59:21.723154   59674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:59:21.723192   59674 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:59:21.723198   59674 kubeadm.go:310] 
	I0722 11:59:21.723226   59674 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:59:21.723255   59674 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:59:21.723339   59674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:59:21.723346   59674 kubeadm.go:310] 
	I0722 11:59:21.723442   59674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:59:21.723495   59674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:59:21.723537   59674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:59:21.723546   59674 kubeadm.go:310] 
	I0722 11:59:21.723709   59674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:59:21.723823   59674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:59:21.723833   59674 kubeadm.go:310] 
	I0722 11:59:21.723941   59674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:59:21.724023   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:59:21.724086   59674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:59:21.724156   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:59:21.724197   59674 kubeadm.go:310] 
	I0722 11:59:21.724212   59674 kubeadm.go:394] duration metric: took 7m57.831193066s to StartCluster
	I0722 11:59:21.724246   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:59:21.724296   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:59:21.771578   59674 cri.go:89] found id: ""
	I0722 11:59:21.771611   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.771622   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:59:21.771631   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:59:21.771694   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:59:21.809027   59674 cri.go:89] found id: ""
	I0722 11:59:21.809055   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.809065   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:59:21.809071   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:59:21.809143   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:59:21.844667   59674 cri.go:89] found id: ""
	I0722 11:59:21.844690   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.844698   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:59:21.844703   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:59:21.844754   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:59:21.888054   59674 cri.go:89] found id: ""
	I0722 11:59:21.888078   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.888086   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:59:21.888091   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:59:21.888150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:59:21.931688   59674 cri.go:89] found id: ""
	I0722 11:59:21.931711   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.931717   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:59:21.931722   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:59:21.931775   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:59:21.974044   59674 cri.go:89] found id: ""
	I0722 11:59:21.974074   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.974095   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:59:21.974102   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:59:21.974170   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:59:22.010302   59674 cri.go:89] found id: ""
	I0722 11:59:22.010326   59674 logs.go:276] 0 containers: []
	W0722 11:59:22.010334   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:59:22.010338   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:59:22.010385   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:59:22.047170   59674 cri.go:89] found id: ""
	I0722 11:59:22.047201   59674 logs.go:276] 0 containers: []
	W0722 11:59:22.047212   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:59:22.047224   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:59:22.047237   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:59:22.086648   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:59:22.086678   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:59:22.141255   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:59:22.141288   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:59:22.157063   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:59:22.157095   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:59:22.244259   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:59:22.244284   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:59:22.244300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0722 11:59:22.357489   59674 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 11:59:22.357536   59674 out.go:239] * 
	W0722 11:59:22.357600   59674 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:59:22.357622   59674 out.go:239] * 
	W0722 11:59:22.358374   59674 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 11:59:22.361655   59674 out.go:177] 
	W0722 11:59:22.362800   59674 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:59:22.362845   59674 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 11:59:22.362860   59674 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
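	In practical terms, the advice repeated in the kubeadm output and in the suggestion above boils down to inspecting the kubelet and any crashed control-plane containers directly on the node. A minimal troubleshooting sketch, using only commands already quoted in this log (CONTAINERID stands for whatever ID the listing returns):
	
	    # Is the kubelet running, and why did it exit? (the preflight warning also said it is not enabled)
	    systemctl status kubelet
	    journalctl -xeu kubelet
	    systemctl enable kubelet.service
	
	    # The health endpoint kubeadm's wait loop was polling
	    curl -sSL http://localhost:10248/healthz
	
	    # List Kubernetes containers via the CRI-O socket, then inspect a failing one
	    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID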
	I0722 11:59:22.364239   59674 out.go:177] 
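	If the kubelet journal does point at a cgroup-driver mismatch, the suggestion above is to retry the start with the kubelet pinned to the systemd cgroup driver, and the banner asks for full logs when filing an issue. A hedged sketch of both steps; <profile> is a placeholder for the affected profile (not named in this excerpt), and the remaining flags from the original `minikube start` invocation would be kept as they were:
	
	    # Retry with the kubelet forced onto the systemd cgroup driver, per the suggestion above
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	
	    # Collect logs to attach to a GitHub issue, as the banner above requests
	    minikube logs -p <profile> --file=logs.txt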
	
	
	==> CRI-O <==
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.716516745Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721649925716494356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=832b4f95-8481-48a6-9645-e90954d2bb31 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.717171340Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f6ffa88-9f54-400a-a362-83358e917e5d name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.717225659Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f6ffa88-9f54-400a-a362-83358e917e5d name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.717478445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbcf2083c04a1d408071b31332ff8b549e73f32f30db045e27cd1eac387c2d6d,PodSandboxId:90478a6390f86f2b8ac6306678e7a77ebcc1ef5ac410b81e2597b14acd8c863a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649381413569476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2dkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a689e-5a99-4889-808f-3e1e199323d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3e9e1a99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43f8eb6548b82cbcc6eaa6343a29f7ef5a5da15fd9bfffca726f89f1615ab31f,PodSandboxId:a55c29b4325026534e80c4cfffd8fbf41556ecb0f71423283a27c387a2adbf3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649381336000056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kz8d9,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 26d2d65c-aa13-4d94-b091-bf674fee0185,},Annotations:map[string]string{io.kubernetes.container.hash: c0e510d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24677b58b615746e542a86e372d6e058377caeae7c3bad8e38e637e0a739401,PodSandboxId:a4b1d613d74c78350d911daa369c5f881e292b7f64caf3dbd1e4d0e5131e1fa3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1721649380835972007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68fcb5f-42b5-408e-9c10-d86b14a1b993,},Annotations:map[string]string{io.kubernetes.container.hash: eb1195fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8b4ca43b70a5fc2132a40265b3221663d6f04079d23a77c1bf87f074070dff,PodSandboxId:01187af5ae6efbddd297e5d7aea2255c17ee3a225816545bc7c80ab8bff072bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1721649379613468207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w89tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4d3074-e552-4c7b-ba0f-f57a3b80f529,},Annotations:map[string]string{io.kubernetes.container.hash: 15644981,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba96f7ba02a2f060e36221e9b63e3ae7cefa25dc8d2fe3bea95788b791e72,PodSandboxId:a104b9d6402861593a1cdceffea6985a08b2887a04adffc4d58ec29a329949e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721649359664000130,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a0c5c6edb5d883fef231826f1af424,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e2be9e61df1a6b265892a736caa6f00fe08ab062efb4dcea99977bbc982a22,PodSandboxId:da9b9182195ccfb38e749ddd2bbc778f38ac355a56a00f33a015258ef05c4348,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721649359589477657,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0e917324a04ef903bd31a4455cde51,},Annotations:map[string]string{io.kubernetes.container.hash: a1ab7ed6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f94464010f7593920e88163954f7bce0420bbe7cb0b46e496d562cf431599b,PodSandboxId:3de43b37dd3d9eace181f1f266ea1854c3889cedfeaccaafbf8ae6c153086193,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721649359551580251,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250d0e5099efb6fe8f3658a0f7969bf8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4c87c40e71e9186221dc05159ba215daa5ebe898cb9d01fa52528238f74ba,PodSandboxId:8181b461637c5a76f87d872a99e1914575ce9816f3aa23a11de015e7ffbe8dfd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721649359533561448,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e9cf4d7e5880d210e342ba58db90aa,},Annotations:map[string]string{io.kubernetes.container.hash: f7930bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f6ffa88-9f54-400a-a362-83358e917e5d name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.755897903Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8fcfb445-c0d1-4922-b449-58344fc34333 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.755968779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8fcfb445-c0d1-4922-b449-58344fc34333 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.757110167Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=52c06396-6c1a-43c4-a587-93667abd648d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.757695108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721649925757671128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52c06396-6c1a-43c4-a587-93667abd648d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.758122273Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74b62a9d-4fae-4a50-95a3-18133bf6b80a name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.758172548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74b62a9d-4fae-4a50-95a3-18133bf6b80a name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.758390109Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbcf2083c04a1d408071b31332ff8b549e73f32f30db045e27cd1eac387c2d6d,PodSandboxId:90478a6390f86f2b8ac6306678e7a77ebcc1ef5ac410b81e2597b14acd8c863a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649381413569476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2dkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a689e-5a99-4889-808f-3e1e199323d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3e9e1a99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43f8eb6548b82cbcc6eaa6343a29f7ef5a5da15fd9bfffca726f89f1615ab31f,PodSandboxId:a55c29b4325026534e80c4cfffd8fbf41556ecb0f71423283a27c387a2adbf3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649381336000056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kz8d9,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 26d2d65c-aa13-4d94-b091-bf674fee0185,},Annotations:map[string]string{io.kubernetes.container.hash: c0e510d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24677b58b615746e542a86e372d6e058377caeae7c3bad8e38e637e0a739401,PodSandboxId:a4b1d613d74c78350d911daa369c5f881e292b7f64caf3dbd1e4d0e5131e1fa3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1721649380835972007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68fcb5f-42b5-408e-9c10-d86b14a1b993,},Annotations:map[string]string{io.kubernetes.container.hash: eb1195fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8b4ca43b70a5fc2132a40265b3221663d6f04079d23a77c1bf87f074070dff,PodSandboxId:01187af5ae6efbddd297e5d7aea2255c17ee3a225816545bc7c80ab8bff072bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1721649379613468207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w89tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4d3074-e552-4c7b-ba0f-f57a3b80f529,},Annotations:map[string]string{io.kubernetes.container.hash: 15644981,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba96f7ba02a2f060e36221e9b63e3ae7cefa25dc8d2fe3bea95788b791e72,PodSandboxId:a104b9d6402861593a1cdceffea6985a08b2887a04adffc4d58ec29a329949e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721649359664000130,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a0c5c6edb5d883fef231826f1af424,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e2be9e61df1a6b265892a736caa6f00fe08ab062efb4dcea99977bbc982a22,PodSandboxId:da9b9182195ccfb38e749ddd2bbc778f38ac355a56a00f33a015258ef05c4348,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721649359589477657,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0e917324a04ef903bd31a4455cde51,},Annotations:map[string]string{io.kubernetes.container.hash: a1ab7ed6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f94464010f7593920e88163954f7bce0420bbe7cb0b46e496d562cf431599b,PodSandboxId:3de43b37dd3d9eace181f1f266ea1854c3889cedfeaccaafbf8ae6c153086193,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721649359551580251,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250d0e5099efb6fe8f3658a0f7969bf8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4c87c40e71e9186221dc05159ba215daa5ebe898cb9d01fa52528238f74ba,PodSandboxId:8181b461637c5a76f87d872a99e1914575ce9816f3aa23a11de015e7ffbe8dfd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721649359533561448,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e9cf4d7e5880d210e342ba58db90aa,},Annotations:map[string]string{io.kubernetes.container.hash: f7930bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=74b62a9d-4fae-4a50-95a3-18133bf6b80a name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.799065652Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ab0cbb9-1646-4887-9689-d38e70a88303 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.799154333Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ab0cbb9-1646-4887-9689-d38e70a88303 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.800719416Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03275bc8-7b39-4a41-bf2f-bcadff678b5d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.801139546Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721649925801110916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03275bc8-7b39-4a41-bf2f-bcadff678b5d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.801996952Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0daf0c3-9fe2-40a9-8cf9-19f287a85d78 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.802310960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0daf0c3-9fe2-40a9-8cf9-19f287a85d78 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.802517306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbcf2083c04a1d408071b31332ff8b549e73f32f30db045e27cd1eac387c2d6d,PodSandboxId:90478a6390f86f2b8ac6306678e7a77ebcc1ef5ac410b81e2597b14acd8c863a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649381413569476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2dkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a689e-5a99-4889-808f-3e1e199323d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3e9e1a99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43f8eb6548b82cbcc6eaa6343a29f7ef5a5da15fd9bfffca726f89f1615ab31f,PodSandboxId:a55c29b4325026534e80c4cfffd8fbf41556ecb0f71423283a27c387a2adbf3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649381336000056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kz8d9,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 26d2d65c-aa13-4d94-b091-bf674fee0185,},Annotations:map[string]string{io.kubernetes.container.hash: c0e510d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24677b58b615746e542a86e372d6e058377caeae7c3bad8e38e637e0a739401,PodSandboxId:a4b1d613d74c78350d911daa369c5f881e292b7f64caf3dbd1e4d0e5131e1fa3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1721649380835972007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68fcb5f-42b5-408e-9c10-d86b14a1b993,},Annotations:map[string]string{io.kubernetes.container.hash: eb1195fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8b4ca43b70a5fc2132a40265b3221663d6f04079d23a77c1bf87f074070dff,PodSandboxId:01187af5ae6efbddd297e5d7aea2255c17ee3a225816545bc7c80ab8bff072bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1721649379613468207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w89tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4d3074-e552-4c7b-ba0f-f57a3b80f529,},Annotations:map[string]string{io.kubernetes.container.hash: 15644981,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba96f7ba02a2f060e36221e9b63e3ae7cefa25dc8d2fe3bea95788b791e72,PodSandboxId:a104b9d6402861593a1cdceffea6985a08b2887a04adffc4d58ec29a329949e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721649359664000130,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a0c5c6edb5d883fef231826f1af424,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e2be9e61df1a6b265892a736caa6f00fe08ab062efb4dcea99977bbc982a22,PodSandboxId:da9b9182195ccfb38e749ddd2bbc778f38ac355a56a00f33a015258ef05c4348,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721649359589477657,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0e917324a04ef903bd31a4455cde51,},Annotations:map[string]string{io.kubernetes.container.hash: a1ab7ed6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f94464010f7593920e88163954f7bce0420bbe7cb0b46e496d562cf431599b,PodSandboxId:3de43b37dd3d9eace181f1f266ea1854c3889cedfeaccaafbf8ae6c153086193,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721649359551580251,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250d0e5099efb6fe8f3658a0f7969bf8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4c87c40e71e9186221dc05159ba215daa5ebe898cb9d01fa52528238f74ba,PodSandboxId:8181b461637c5a76f87d872a99e1914575ce9816f3aa23a11de015e7ffbe8dfd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721649359533561448,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e9cf4d7e5880d210e342ba58db90aa,},Annotations:map[string]string{io.kubernetes.container.hash: f7930bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0daf0c3-9fe2-40a9-8cf9-19f287a85d78 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.834449470Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=628b511e-493e-4d47-8337-2c344d2c0254 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.834515284Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=628b511e-493e-4d47-8337-2c344d2c0254 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.835970898Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a5173b3-3991-4cab-9bf1-3229d8627403 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.836403928Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721649925836384334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a5173b3-3991-4cab-9bf1-3229d8627403 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.836957904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e87206e-2a1f-4a8c-9a67-67a0ccb0d4ed name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.837022340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e87206e-2a1f-4a8c-9a67-67a0ccb0d4ed name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:05:25 embed-certs-802149 crio[722]: time="2024-07-22 12:05:25.837253936Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbcf2083c04a1d408071b31332ff8b549e73f32f30db045e27cd1eac387c2d6d,PodSandboxId:90478a6390f86f2b8ac6306678e7a77ebcc1ef5ac410b81e2597b14acd8c863a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649381413569476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2dkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a689e-5a99-4889-808f-3e1e199323d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3e9e1a99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43f8eb6548b82cbcc6eaa6343a29f7ef5a5da15fd9bfffca726f89f1615ab31f,PodSandboxId:a55c29b4325026534e80c4cfffd8fbf41556ecb0f71423283a27c387a2adbf3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649381336000056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kz8d9,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 26d2d65c-aa13-4d94-b091-bf674fee0185,},Annotations:map[string]string{io.kubernetes.container.hash: c0e510d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24677b58b615746e542a86e372d6e058377caeae7c3bad8e38e637e0a739401,PodSandboxId:a4b1d613d74c78350d911daa369c5f881e292b7f64caf3dbd1e4d0e5131e1fa3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1721649380835972007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68fcb5f-42b5-408e-9c10-d86b14a1b993,},Annotations:map[string]string{io.kubernetes.container.hash: eb1195fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8b4ca43b70a5fc2132a40265b3221663d6f04079d23a77c1bf87f074070dff,PodSandboxId:01187af5ae6efbddd297e5d7aea2255c17ee3a225816545bc7c80ab8bff072bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1721649379613468207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w89tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4d3074-e552-4c7b-ba0f-f57a3b80f529,},Annotations:map[string]string{io.kubernetes.container.hash: 15644981,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba96f7ba02a2f060e36221e9b63e3ae7cefa25dc8d2fe3bea95788b791e72,PodSandboxId:a104b9d6402861593a1cdceffea6985a08b2887a04adffc4d58ec29a329949e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721649359664000130,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a0c5c6edb5d883fef231826f1af424,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e2be9e61df1a6b265892a736caa6f00fe08ab062efb4dcea99977bbc982a22,PodSandboxId:da9b9182195ccfb38e749ddd2bbc778f38ac355a56a00f33a015258ef05c4348,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721649359589477657,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0e917324a04ef903bd31a4455cde51,},Annotations:map[string]string{io.kubernetes.container.hash: a1ab7ed6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f94464010f7593920e88163954f7bce0420bbe7cb0b46e496d562cf431599b,PodSandboxId:3de43b37dd3d9eace181f1f266ea1854c3889cedfeaccaafbf8ae6c153086193,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721649359551580251,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250d0e5099efb6fe8f3658a0f7969bf8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4c87c40e71e9186221dc05159ba215daa5ebe898cb9d01fa52528238f74ba,PodSandboxId:8181b461637c5a76f87d872a99e1914575ce9816f3aa23a11de015e7ffbe8dfd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721649359533561448,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e9cf4d7e5880d210e342ba58db90aa,},Annotations:map[string]string{io.kubernetes.container.hash: f7930bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e87206e-2a1f-4a8c-9a67-67a0ccb0d4ed name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fbcf2083c04a1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   90478a6390f86       coredns-7db6d8ff4d-c2dkr
	43f8eb6548b82       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   a55c29b432502       coredns-7db6d8ff4d-kz8d9
	d24677b58b615       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   a4b1d613d74c7       storage-provisioner
	4d8b4ca43b70a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   01187af5ae6ef       kube-proxy-w89tg
	68eba96f7ba02       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   a104b9d640286       kube-scheduler-embed-certs-802149
	10e2be9e61df1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   da9b9182195cc       etcd-embed-certs-802149
	f1f94464010f7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   3de43b37dd3d9       kube-controller-manager-embed-certs-802149
	7ff4c87c40e71       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   8181b461637c5       kube-apiserver-embed-certs-802149
	
	
	==> coredns [43f8eb6548b82cbcc6eaa6343a29f7ef5a5da15fd9bfffca726f89f1615ab31f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [fbcf2083c04a1d408071b31332ff8b549e73f32f30db045e27cd1eac387c2d6d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-802149
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-802149
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=embed-certs-802149
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T11_56_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 11:56:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-802149
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 12:05:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 12:01:32 +0000   Mon, 22 Jul 2024 11:56:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 12:01:32 +0000   Mon, 22 Jul 2024 11:56:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 12:01:32 +0000   Mon, 22 Jul 2024 11:56:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 12:01:32 +0000   Mon, 22 Jul 2024 11:56:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.113
	  Hostname:    embed-certs-802149
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8766530bf8c84d62a77555a63c00c03f
	  System UUID:                8766530b-f8c8-4d62-a775-55a63c00c03f
	  Boot ID:                    d82689a1-9245-4021-98d4-b2fe0c418ca5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-c2dkr                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-kz8d9                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-802149                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-embed-certs-802149             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-embed-certs-802149    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-proxy-w89tg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-802149             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 metrics-server-569cc877fc-88d4n               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m6s   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m28s  kubelet          Node embed-certs-802149 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m22s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m22s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m22s  kubelet          Node embed-certs-802149 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s  kubelet          Node embed-certs-802149 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s  kubelet          Node embed-certs-802149 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s   node-controller  Node embed-certs-802149 event: Registered Node embed-certs-802149 in Controller
	
	
	==> dmesg <==
	[  +0.049789] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040325] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.479305] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.146525] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.579553] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.066304] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.061388] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067153] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.219458] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.117690] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.284661] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[Jul22 11:51] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +0.064511] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.850546] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +5.639504] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.560214] kauditd_printk_skb: 84 callbacks suppressed
	[Jul22 11:55] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.763530] systemd-fstab-generator[3561]: Ignoring "noauto" option for root device
	[Jul22 11:56] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.595929] systemd-fstab-generator[3879]: Ignoring "noauto" option for root device
	[ +14.862534] systemd-fstab-generator[4087]: Ignoring "noauto" option for root device
	[  +0.106761] kauditd_printk_skb: 14 callbacks suppressed
	[Jul22 11:57] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [10e2be9e61df1a6b265892a736caa6f00fe08ab062efb4dcea99977bbc982a22] <==
	{"level":"info","ts":"2024-07-22T11:56:00.075513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6bf3317fd0e8dc60 switched to configuration voters=(7778615406434507872)"}
	{"level":"info","ts":"2024-07-22T11:56:00.075644Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"19cf5c6a1483664a","local-member-id":"6bf3317fd0e8dc60","added-peer-id":"6bf3317fd0e8dc60","added-peer-peer-urls":["https://192.168.72.113:2380"]}
	{"level":"info","ts":"2024-07-22T11:56:00.084957Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-22T11:56:00.085341Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6bf3317fd0e8dc60","initial-advertise-peer-urls":["https://192.168.72.113:2380"],"listen-peer-urls":["https://192.168.72.113:2380"],"advertise-client-urls":["https://192.168.72.113:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.113:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-22T11:56:00.085378Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T11:56:00.085496Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.113:2380"}
	{"level":"info","ts":"2024-07-22T11:56:00.085526Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.113:2380"}
	{"level":"info","ts":"2024-07-22T11:56:00.330453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6bf3317fd0e8dc60 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-22T11:56:00.330698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6bf3317fd0e8dc60 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-22T11:56:00.330823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6bf3317fd0e8dc60 received MsgPreVoteResp from 6bf3317fd0e8dc60 at term 1"}
	{"level":"info","ts":"2024-07-22T11:56:00.330922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6bf3317fd0e8dc60 became candidate at term 2"}
	{"level":"info","ts":"2024-07-22T11:56:00.330951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6bf3317fd0e8dc60 received MsgVoteResp from 6bf3317fd0e8dc60 at term 2"}
	{"level":"info","ts":"2024-07-22T11:56:00.331033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6bf3317fd0e8dc60 became leader at term 2"}
	{"level":"info","ts":"2024-07-22T11:56:00.331063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6bf3317fd0e8dc60 elected leader 6bf3317fd0e8dc60 at term 2"}
	{"level":"info","ts":"2024-07-22T11:56:00.335784Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6bf3317fd0e8dc60","local-member-attributes":"{Name:embed-certs-802149 ClientURLs:[https://192.168.72.113:2379]}","request-path":"/0/members/6bf3317fd0e8dc60/attributes","cluster-id":"19cf5c6a1483664a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T11:56:00.337317Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:56:00.337766Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:56:00.339451Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:56:00.342908Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.113:2379"}
	{"level":"info","ts":"2024-07-22T11:56:00.355417Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"19cf5c6a1483664a","local-member-id":"6bf3317fd0e8dc60","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:56:00.355512Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:56:00.355552Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:56:00.356863Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T11:56:00.357157Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T11:56:00.382329Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 12:05:26 up 14 min,  0 users,  load average: 0.13, 0.11, 0.08
	Linux embed-certs-802149 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7ff4c87c40e71e9186221dc05159ba215daa5ebe898cb9d01fa52528238f74ba] <==
	I0722 11:59:21.428527       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:01:02.262890       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:01:02.262992       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0722 12:01:03.263197       1 handler_proxy.go:93] no RequestInfo found in the context
	W0722 12:01:03.263385       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:01:03.263590       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 12:01:03.263661       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0722 12:01:03.263624       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 12:01:03.265619       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:02:03.264785       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:02:03.264837       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 12:02:03.264846       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:02:03.265922       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:02:03.266024       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 12:02:03.266052       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:04:03.265041       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:04:03.265135       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 12:04:03.265144       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:04:03.266361       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:04:03.266586       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 12:04:03.266649       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f1f94464010f7593920e88163954f7bce0420bbe7cb0b46e496d562cf431599b] <==
	I0722 11:59:57.858602       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="100.497µs"
	E0722 12:00:18.374867       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:00:18.885477       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:00:48.379228       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:00:48.893714       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:01:18.385027       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:01:18.901971       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:01:48.390235       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:01:48.908930       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:02:18.396484       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:02:18.916893       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 12:02:25.849003       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="238.705µs"
	I0722 12:02:37.848162       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="130.818µs"
	E0722 12:02:48.402158       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:02:48.924453       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:03:18.407559       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:03:18.932893       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:03:48.412767       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:03:48.941048       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:04:18.417743       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:04:18.950248       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:04:48.422527       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:04:48.960089       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:05:18.429598       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:05:18.967969       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4d8b4ca43b70a5fc2132a40265b3221663d6f04079d23a77c1bf87f074070dff] <==
	I0722 11:56:19.863746       1 server_linux.go:69] "Using iptables proxy"
	I0722 11:56:19.878603       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.113"]
	I0722 11:56:20.001968       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 11:56:20.002009       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 11:56:20.002024       1 server_linux.go:165] "Using iptables Proxier"
	I0722 11:56:20.011520       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 11:56:20.011735       1 server.go:872] "Version info" version="v1.30.3"
	I0722 11:56:20.011747       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 11:56:20.022738       1 config.go:192] "Starting service config controller"
	I0722 11:56:20.022781       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 11:56:20.022873       1 config.go:101] "Starting endpoint slice config controller"
	I0722 11:56:20.022894       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 11:56:20.026207       1 config.go:319] "Starting node config controller"
	I0722 11:56:20.026315       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 11:56:20.123910       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 11:56:20.123925       1 shared_informer.go:320] Caches are synced for service config
	I0722 11:56:20.126884       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [68eba96f7ba02a2f060e36221e9b63e3ae7cefa25dc8d2fe3bea95788b791e72] <==
	W0722 11:56:02.282790       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 11:56:02.282819       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 11:56:02.282872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 11:56:02.282898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 11:56:02.282937       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 11:56:02.282991       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 11:56:02.283253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 11:56:02.283320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 11:56:03.106116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 11:56:03.106144       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 11:56:03.116604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 11:56:03.116685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 11:56:03.207628       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 11:56:03.207715       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 11:56:03.219092       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 11:56:03.219173       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 11:56:03.425592       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0722 11:56:03.425724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0722 11:56:03.445529       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 11:56:03.445619       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 11:56:03.473796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 11:56:03.473895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 11:56:03.539453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 11:56:03.540001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0722 11:56:06.572045       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 12:03:04 embed-certs-802149 kubelet[3886]: E0722 12:03:04.852077    3886 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 12:03:04 embed-certs-802149 kubelet[3886]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 12:03:04 embed-certs-802149 kubelet[3886]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 12:03:04 embed-certs-802149 kubelet[3886]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:03:04 embed-certs-802149 kubelet[3886]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:03:17 embed-certs-802149 kubelet[3886]: E0722 12:03:17.833214    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:03:32 embed-certs-802149 kubelet[3886]: E0722 12:03:32.832805    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:03:43 embed-certs-802149 kubelet[3886]: E0722 12:03:43.832392    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:03:54 embed-certs-802149 kubelet[3886]: E0722 12:03:54.832304    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:04:04 embed-certs-802149 kubelet[3886]: E0722 12:04:04.854861    3886 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 12:04:04 embed-certs-802149 kubelet[3886]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 12:04:04 embed-certs-802149 kubelet[3886]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 12:04:04 embed-certs-802149 kubelet[3886]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:04:04 embed-certs-802149 kubelet[3886]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:04:06 embed-certs-802149 kubelet[3886]: E0722 12:04:06.833017    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:04:21 embed-certs-802149 kubelet[3886]: E0722 12:04:21.833169    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:04:35 embed-certs-802149 kubelet[3886]: E0722 12:04:35.832884    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:04:49 embed-certs-802149 kubelet[3886]: E0722 12:04:49.832602    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:05:02 embed-certs-802149 kubelet[3886]: E0722 12:05:02.834153    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:05:04 embed-certs-802149 kubelet[3886]: E0722 12:05:04.857187    3886 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 12:05:04 embed-certs-802149 kubelet[3886]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 12:05:04 embed-certs-802149 kubelet[3886]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 12:05:04 embed-certs-802149 kubelet[3886]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:05:04 embed-certs-802149 kubelet[3886]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:05:14 embed-certs-802149 kubelet[3886]: E0722 12:05:14.833487    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	
	
	==> storage-provisioner [d24677b58b615746e542a86e372d6e058377caeae7c3bad8e38e637e0a739401] <==
	I0722 11:56:20.956677       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 11:56:20.967130       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 11:56:20.967480       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 11:56:20.979502       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 11:56:20.981950       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bba87dd7-5cc0-41de-9f7c-2def2a497698", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-802149_4fbf273f-c8be-49f7-8f6c-4340f0b6a053 became leader
	I0722 11:56:20.982038       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-802149_4fbf273f-c8be-49f7-8f6c-4340f0b6a053!
	I0722 11:56:21.084357       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-802149_4fbf273f-c8be-49f7-8f6c-4340f0b6a053!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-802149 -n embed-certs-802149
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-802149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-88d4n
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-802149 describe pod metrics-server-569cc877fc-88d4n
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-802149 describe pod metrics-server-569cc877fc-88d4n: exit status 1 (59.717461ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-88d4n" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-802149 describe pod metrics-server-569cc877fc-88d4n: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.98s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-605740 -n default-k8s-diff-port-605740
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-22 12:06:01.627297167 +0000 UTC m=+5832.354711507
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-605740 -n default-k8s-diff-port-605740
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-605740 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-605740 logs -n 25: (2.009247374s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC | 22 Jul 24 11:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC | 22 Jul 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-467176                              | cert-expiration-467176       | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-339929             | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-339929                                   | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-467176                              | cert-expiration-467176       | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	| start   | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-802149            | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	| delete  | -p                                                     | disable-driver-mounts-737017 | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	|         | disable-driver-mounts-737017                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:46 UTC |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-101261        | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:45 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-339929                  | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-339929 --memory=2200                     | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC | 22 Jul 24 11:57 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-605740  | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC | 22 Jul 24 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC |                     |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-802149                 | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-101261                              | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:47 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-101261             | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:47 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-101261                              | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-605740       | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:49 UTC | 22 Jul 24 11:57 UTC |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
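	For reference, the final "start" row in the audit-log table above wraps a single command across several cells. Reassembled (a sketch only; it assumes the test harness binary at out/minikube-linux-amd64, per the MINIKUBE_BIN setting in the log below), the recorded invocation is roughly:
	
	  out/minikube-linux-amd64 start -p default-k8s-diff-port-605740 --memory=2200 \
	    --alsologtostderr --wait=true --apiserver-port=8444 \
	    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.30.3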
	
	==> Last Start <==
	Log file created at: 2024/07/22 11:49:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 11:49:15.771364   60225 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:49:15.771757   60225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:49:15.771777   60225 out.go:304] Setting ErrFile to fd 2...
	I0722 11:49:15.771784   60225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:49:15.772270   60225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:49:15.773178   60225 out.go:298] Setting JSON to false
	I0722 11:49:15.774093   60225 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5508,"bootTime":1721643448,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 11:49:15.774158   60225 start.go:139] virtualization: kvm guest
	I0722 11:49:15.776078   60225 out.go:177] * [default-k8s-diff-port-605740] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 11:49:15.777632   60225 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 11:49:15.777656   60225 notify.go:220] Checking for updates...
	I0722 11:49:15.780016   60225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 11:49:15.781179   60225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:49:15.782401   60225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:49:15.783538   60225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 11:49:15.784660   60225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 11:49:15.786153   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:49:15.786546   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:49:15.786580   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:49:15.801130   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40897
	I0722 11:49:15.801454   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:49:15.802000   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:49:15.802022   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:49:15.802343   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:49:15.802519   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:49:15.802785   60225 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 11:49:15.803097   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:49:15.803130   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:49:15.817222   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I0722 11:49:15.817616   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:49:15.818025   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:49:15.818050   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:49:15.818316   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:49:15.818457   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:49:15.851885   60225 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 11:49:15.853142   60225 start.go:297] selected driver: kvm2
	I0722 11:49:15.853162   60225 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:49:15.853293   60225 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 11:49:15.854178   60225 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:49:15.854267   60225 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 11:49:15.869086   60225 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 11:49:15.869437   60225 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:49:15.869496   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:49:15.869510   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:49:15.869553   60225 start.go:340] cluster config:
	{Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:49:15.869650   60225 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:49:15.871443   60225 out.go:177] * Starting "default-k8s-diff-port-605740" primary control-plane node in "default-k8s-diff-port-605740" cluster
	I0722 11:49:18.708660   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:15.872666   60225 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:49:15.872712   60225 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 11:49:15.872722   60225 cache.go:56] Caching tarball of preloaded images
	I0722 11:49:15.872822   60225 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 11:49:15.872836   60225 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 11:49:15.872964   60225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/config.json ...
	I0722 11:49:15.873188   60225 start.go:360] acquireMachinesLock for default-k8s-diff-port-605740: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:49:21.780635   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:27.860643   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:30.932670   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:37.012663   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:40.084620   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:46.164558   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:49.236597   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:55.316683   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:58.388708   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:04.468652   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:07.540692   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:13.620745   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:16.692661   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:22.772655   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:25.844570   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:31.924648   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:34.996632   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:38.000554   59477 start.go:364] duration metric: took 3m13.232713685s to acquireMachinesLock for "embed-certs-802149"
	I0722 11:50:38.000603   59477 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:50:38.000609   59477 fix.go:54] fixHost starting: 
	I0722 11:50:38.000916   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:50:38.000945   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:50:38.015673   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0722 11:50:38.016063   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:50:38.016570   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:50:38.016599   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:50:38.016926   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:50:38.017123   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:38.017256   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:50:38.018766   59477 fix.go:112] recreateIfNeeded on embed-certs-802149: state=Stopped err=<nil>
	I0722 11:50:38.018787   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	W0722 11:50:38.018925   59477 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:50:38.020306   59477 out.go:177] * Restarting existing kvm2 VM for "embed-certs-802149" ...
	I0722 11:50:38.021405   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Start
	I0722 11:50:38.021569   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring networks are active...
	I0722 11:50:38.022209   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring network default is active
	I0722 11:50:38.022492   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring network mk-embed-certs-802149 is active
	I0722 11:50:38.022753   59477 main.go:141] libmachine: (embed-certs-802149) Getting domain xml...
	I0722 11:50:38.023364   59477 main.go:141] libmachine: (embed-certs-802149) Creating domain...
	I0722 11:50:39.205696   59477 main.go:141] libmachine: (embed-certs-802149) Waiting to get IP...
	I0722 11:50:39.206555   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.206928   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.207002   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.206893   60553 retry.go:31] will retry after 250.927989ms: waiting for machine to come up
	I0722 11:50:39.459432   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.459909   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.459938   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.459862   60553 retry.go:31] will retry after 277.950273ms: waiting for machine to come up
	I0722 11:50:37.998282   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:50:37.998320   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:50:37.998616   58921 buildroot.go:166] provisioning hostname "no-preload-339929"
	I0722 11:50:37.998638   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:50:37.998852   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:50:38.000410   58921 machine.go:97] duration metric: took 4m37.434000152s to provisionDockerMachine
	I0722 11:50:38.000456   58921 fix.go:56] duration metric: took 4m37.453731858s for fixHost
	I0722 11:50:38.000466   58921 start.go:83] releasing machines lock for "no-preload-339929", held for 4m37.453770575s
	W0722 11:50:38.000487   58921 start.go:714] error starting host: provision: host is not running
	W0722 11:50:38.000589   58921 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0722 11:50:38.000597   58921 start.go:729] Will try again in 5 seconds ...
	I0722 11:50:39.739339   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.739770   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.739799   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.739724   60553 retry.go:31] will retry after 367.4788ms: waiting for machine to come up
	I0722 11:50:40.109153   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:40.109568   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:40.109598   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:40.109518   60553 retry.go:31] will retry after 599.052603ms: waiting for machine to come up
	I0722 11:50:40.709866   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:40.710342   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:40.710375   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:40.710299   60553 retry.go:31] will retry after 469.478286ms: waiting for machine to come up
	I0722 11:50:41.180930   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:41.181348   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:41.181370   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:41.181302   60553 retry.go:31] will retry after 690.713081ms: waiting for machine to come up
	I0722 11:50:41.873801   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:41.874158   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:41.874182   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:41.874106   60553 retry.go:31] will retry after 828.336067ms: waiting for machine to come up
	I0722 11:50:42.703984   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:42.704401   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:42.704422   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:42.704340   60553 retry.go:31] will retry after 1.22368693s: waiting for machine to come up
	I0722 11:50:43.929406   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:43.929866   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:43.929896   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:43.929838   60553 retry.go:31] will retry after 1.809806439s: waiting for machine to come up
	I0722 11:50:43.002990   58921 start.go:360] acquireMachinesLock for no-preload-339929: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:50:45.741657   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:45.742012   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:45.742034   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:45.741979   60553 retry.go:31] will retry after 2.216041266s: waiting for machine to come up
	I0722 11:50:47.959511   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:47.959979   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:47.960003   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:47.959919   60553 retry.go:31] will retry after 2.278973432s: waiting for machine to come up
	I0722 11:50:50.241992   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:50.242399   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:50.242413   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:50.242377   60553 retry.go:31] will retry after 2.533863574s: waiting for machine to come up
	I0722 11:50:52.779222   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:52.779627   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:52.779661   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:52.779579   60553 retry.go:31] will retry after 3.004874532s: waiting for machine to come up
	I0722 11:50:57.057071   59674 start.go:364] duration metric: took 3m21.54200658s to acquireMachinesLock for "old-k8s-version-101261"
	I0722 11:50:57.057128   59674 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:50:57.057138   59674 fix.go:54] fixHost starting: 
	I0722 11:50:57.057543   59674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:50:57.057575   59674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:50:57.073788   59674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36245
	I0722 11:50:57.074103   59674 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:50:57.074561   59674 main.go:141] libmachine: Using API Version  1
	I0722 11:50:57.074582   59674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:50:57.074903   59674 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:50:57.075091   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:50:57.075225   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetState
	I0722 11:50:57.076587   59674 fix.go:112] recreateIfNeeded on old-k8s-version-101261: state=Stopped err=<nil>
	I0722 11:50:57.076607   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	W0722 11:50:57.076745   59674 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:50:57.079659   59674 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-101261" ...
	I0722 11:50:55.787998   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.788533   59477 main.go:141] libmachine: (embed-certs-802149) Found IP for machine: 192.168.72.113
	I0722 11:50:55.788556   59477 main.go:141] libmachine: (embed-certs-802149) Reserving static IP address...
	I0722 11:50:55.788567   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has current primary IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.788933   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "embed-certs-802149", mac: "52:54:00:ce:af:8a", ip: "192.168.72.113"} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.788954   59477 main.go:141] libmachine: (embed-certs-802149) DBG | skip adding static IP to network mk-embed-certs-802149 - found existing host DHCP lease matching {name: "embed-certs-802149", mac: "52:54:00:ce:af:8a", ip: "192.168.72.113"}
	I0722 11:50:55.788965   59477 main.go:141] libmachine: (embed-certs-802149) Reserved static IP address: 192.168.72.113
	I0722 11:50:55.788974   59477 main.go:141] libmachine: (embed-certs-802149) Waiting for SSH to be available...
	I0722 11:50:55.788984   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Getting to WaitForSSH function...
	I0722 11:50:55.791252   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.791573   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.791597   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.791699   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Using SSH client type: external
	I0722 11:50:55.791735   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa (-rw-------)
	I0722 11:50:55.791758   59477 main.go:141] libmachine: (embed-certs-802149) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:50:55.791768   59477 main.go:141] libmachine: (embed-certs-802149) DBG | About to run SSH command:
	I0722 11:50:55.791776   59477 main.go:141] libmachine: (embed-certs-802149) DBG | exit 0
	I0722 11:50:55.916215   59477 main.go:141] libmachine: (embed-certs-802149) DBG | SSH cmd err, output: <nil>: 
	I0722 11:50:55.916575   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetConfigRaw
	I0722 11:50:55.917177   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:55.919429   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.919723   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.919755   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.920020   59477 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/config.json ...
	I0722 11:50:55.920227   59477 machine.go:94] provisionDockerMachine start ...
	I0722 11:50:55.920249   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:55.920461   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:55.922469   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.922731   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.922756   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.922887   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:55.923063   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:55.923205   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:55.923340   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:55.923492   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:55.923698   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:55.923712   59477 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:50:56.032434   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:50:56.032465   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.032684   59477 buildroot.go:166] provisioning hostname "embed-certs-802149"
	I0722 11:50:56.032712   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.032892   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.035477   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.035797   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.035826   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.035969   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.036126   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.036288   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.036426   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.036649   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.036806   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.036818   59477 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-802149 && echo "embed-certs-802149" | sudo tee /etc/hostname
	I0722 11:50:56.158574   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-802149
	
	I0722 11:50:56.158609   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.161390   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.161780   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.161812   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.161978   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.162246   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.162444   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.162593   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.162793   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.162965   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.162983   59477 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-802149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-802149/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-802149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:50:56.281386   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:50:56.281421   59477 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:50:56.281454   59477 buildroot.go:174] setting up certificates
	I0722 11:50:56.281470   59477 provision.go:84] configureAuth start
	I0722 11:50:56.281487   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.281781   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:56.284122   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.284438   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.284468   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.284549   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.286400   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.286806   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.286835   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.286962   59477 provision.go:143] copyHostCerts
	I0722 11:50:56.287027   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:50:56.287038   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:50:56.287102   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:50:56.287205   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:50:56.287214   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:50:56.287241   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:50:56.287297   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:50:56.287304   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:50:56.287326   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:50:56.287372   59477 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.embed-certs-802149 san=[127.0.0.1 192.168.72.113 embed-certs-802149 localhost minikube]
	I0722 11:50:56.388618   59477 provision.go:177] copyRemoteCerts
	I0722 11:50:56.388666   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:50:56.388689   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.391149   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.391436   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.391460   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.391656   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.391810   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.391928   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.392068   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:56.474640   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:50:56.497641   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 11:50:56.519444   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:50:56.541351   59477 provision.go:87] duration metric: took 259.857731ms to configureAuth
	I0722 11:50:56.541381   59477 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:50:56.541543   59477 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:50:56.541625   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.544154   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.544682   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.544718   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.544922   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.545125   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.545301   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.545427   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.545653   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.545828   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.545844   59477 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:50:56.811690   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:50:56.811726   59477 machine.go:97] duration metric: took 891.484788ms to provisionDockerMachine
	I0722 11:50:56.811740   59477 start.go:293] postStartSetup for "embed-certs-802149" (driver="kvm2")
	I0722 11:50:56.811772   59477 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:50:56.811791   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:56.812107   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:50:56.812137   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.814602   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.815007   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.815032   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.815143   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.815380   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.815566   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.815746   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:56.904332   59477 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:50:56.908423   59477 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:50:56.908451   59477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:50:56.908508   59477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:50:56.908587   59477 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:50:56.908680   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:50:56.919264   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:50:56.943783   59477 start.go:296] duration metric: took 132.033326ms for postStartSetup
	I0722 11:50:56.943814   59477 fix.go:56] duration metric: took 18.943205526s for fixHost
	I0722 11:50:56.943833   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.946256   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.946547   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.946575   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.946732   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.946929   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.947082   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.947188   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.947356   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.947518   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.947528   59477 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:50:57.056893   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649057.031410961
	
	I0722 11:50:57.056927   59477 fix.go:216] guest clock: 1721649057.031410961
	I0722 11:50:57.056936   59477 fix.go:229] Guest: 2024-07-22 11:50:57.031410961 +0000 UTC Remote: 2024-07-22 11:50:56.943818166 +0000 UTC m=+212.308172183 (delta=87.592795ms)
	I0722 11:50:57.056961   59477 fix.go:200] guest clock delta is within tolerance: 87.592795ms
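The guest-clock check above runs `date +%s.%N` on the guest, compares the result against the host clock, and accepts the machine when the delta (here ~87ms) is within tolerance. A rough sketch of that comparison — the parsing and the tolerance constant are illustrative, not minikube's exact fix.go logic:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (e.g. "1721649057.031410961")
// into a time.Time. %N is always nine zero-padded digits, so the fractional
// part parses directly as nanoseconds.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1721649057.031410961")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	// Illustrative tolerance; the log above only reports that the measured
	// delta was within whatever tolerance fix.go applies.
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}
```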
	I0722 11:50:57.056970   59477 start.go:83] releasing machines lock for "embed-certs-802149", held for 19.056384178s
	I0722 11:50:57.057002   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.057268   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:57.059965   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.060412   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.060443   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.060671   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061167   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061345   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061428   59477 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:50:57.061479   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:57.061561   59477 ssh_runner.go:195] Run: cat /version.json
	I0722 11:50:57.061586   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:57.064433   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.064735   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.064856   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.064879   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.065018   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:57.065118   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.065143   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.065201   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:57.065298   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:57.065408   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:57.065481   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:57.065556   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:57.065624   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:57.065770   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:57.167044   59477 ssh_runner.go:195] Run: systemctl --version
	I0722 11:50:57.172714   59477 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:50:57.313674   59477 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:50:57.319474   59477 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:50:57.319535   59477 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:50:57.335011   59477 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:50:57.335031   59477 start.go:495] detecting cgroup driver to use...
	I0722 11:50:57.335093   59477 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:50:57.351191   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:50:57.365322   59477 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:50:57.365376   59477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:50:57.379264   59477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:50:57.393946   59477 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:50:57.510830   59477 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:50:57.687208   59477 docker.go:233] disabling docker service ...
	I0722 11:50:57.687269   59477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:50:57.703909   59477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:50:57.717812   59477 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:50:57.855988   59477 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:50:57.973911   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:50:57.988891   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:50:58.007784   59477 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 11:50:58.007841   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.019588   59477 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:50:58.019649   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.030056   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.042635   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.053368   59477 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:50:58.064180   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.074677   59477 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.092573   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
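The sequence of `sed` edits above amounts to a small CRI-O drop-in: the pause image, `cgroupfs` as the cgroup manager, `conmon_cgroup = "pod"`, and an unprivileged-port sysctl. Reconstructed from those commands (not read back from the guest, and the TOML section names are assumptions about CRI-O's layout), the resulting `/etc/crio/crio.conf.d/02-crio.conf` should look roughly like the constant below:

```go
package main

import (
	"fmt"
	"os"
)

// crioDropIn approximates the state of /etc/crio/crio.conf.d/02-crio.conf
// after the sed edits logged above (a reconstruction, not a capture).
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	// Writing the drop-in wholesale would be an alternative to the in-place edits.
	if err := os.WriteFile("02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```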
	I0722 11:50:58.103630   59477 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:50:58.114065   59477 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:50:58.114131   59477 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:50:58.128769   59477 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:50:58.139226   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:50:58.301342   59477 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:50:58.455996   59477 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:50:58.456085   59477 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:50:58.460904   59477 start.go:563] Will wait 60s for crictl version
	I0722 11:50:58.460969   59477 ssh_runner.go:195] Run: which crictl
	I0722 11:50:58.464918   59477 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:50:58.501783   59477 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:50:58.501867   59477 ssh_runner.go:195] Run: crio --version
	I0722 11:50:58.529010   59477 ssh_runner.go:195] Run: crio --version
	I0722 11:50:58.566811   59477 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 11:50:58.568309   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:58.571088   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:58.571594   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:58.571620   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:58.571813   59477 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 11:50:58.575927   59477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:50:58.589002   59477 kubeadm.go:883] updating cluster {Name:embed-certs-802149 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:50:58.589126   59477 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:50:58.589187   59477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:50:58.625716   59477 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 11:50:58.625836   59477 ssh_runner.go:195] Run: which lz4
	I0722 11:50:58.629760   59477 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 11:50:58.634037   59477 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:50:58.634070   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 11:50:57.080830   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .Start
	I0722 11:50:57.080987   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring networks are active...
	I0722 11:50:57.081647   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring network default is active
	I0722 11:50:57.081955   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring network mk-old-k8s-version-101261 is active
	I0722 11:50:57.082277   59674 main.go:141] libmachine: (old-k8s-version-101261) Getting domain xml...
	I0722 11:50:57.083008   59674 main.go:141] libmachine: (old-k8s-version-101261) Creating domain...
	I0722 11:50:58.331212   59674 main.go:141] libmachine: (old-k8s-version-101261) Waiting to get IP...
	I0722 11:50:58.332090   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:58.332510   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:58.332594   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:58.332505   60690 retry.go:31] will retry after 310.971479ms: waiting for machine to come up
	I0722 11:50:58.645391   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:58.645871   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:58.645898   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:58.645841   60690 retry.go:31] will retry after 371.739884ms: waiting for machine to come up
	I0722 11:50:59.019622   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.020229   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.020258   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.020202   60690 retry.go:31] will retry after 459.770177ms: waiting for machine to come up
	I0722 11:50:59.482207   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.482871   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.482901   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.482830   60690 retry.go:31] will retry after 459.633846ms: waiting for machine to come up
	I0722 11:50:59.944748   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.945204   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.945234   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.945166   60690 retry.go:31] will retry after 661.206679ms: waiting for machine to come up
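While the embed-certs node provisions, the old-k8s-version VM (pid 59674) is still waiting for its DHCP lease; the `retry.go:31` lines show the poll interval growing on each failed lookup. A generic sketch of that wait loop — the `lookupIP` stub and the growth factor are assumptions, not libmachine's implementation:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the hypervisor for the domain's current
// DHCP lease; it returns an error until the lease appears.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls until the machine reports an address, sleeping a little
// longer (with jitter) after every failed attempt, as the log above does.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the interval on each failure
	}
	return "", fmt.Errorf("timed out waiting for %s to come up", domain)
}

func main() {
	if _, err := waitForIP("old-k8s-version-101261", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
```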
	I0722 11:51:00.149442   59477 crio.go:462] duration metric: took 1.519707341s to copy over tarball
	I0722 11:51:00.149516   59477 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:02.402666   59477 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.253119001s)
	I0722 11:51:02.402691   59477 crio.go:469] duration metric: took 2.253218813s to extract the tarball
	I0722 11:51:02.402699   59477 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:02.441191   59477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:02.487854   59477 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:51:02.487881   59477 cache_images.go:84] Images are preloaded, skipping loading
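The decision between "couldn't find preloaded image … assuming images are not preloaded" and "all images are preloaded" above comes from listing the runtime's images through CRI (`crictl images --output json`) and checking for the expected Kubernetes image tags. A hedged sketch of that check, assuming the standard CRI JSON shape (`{"images":[{"repoTags":[...]}]}`):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// criImages mirrors the relevant part of `crictl images --output json`.
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether a tag is already present in the runtime, which is
// what decides whether the preload tarball needs to be copied and extracted.
func hasImage(raw []byte, want string) (bool, error) {
	var parsed criImages
	if err := json.Unmarshal(raw, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"]}]}`)
	ok, err := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.30.3")
	fmt.Println(ok, err)
}
```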
	I0722 11:51:02.487890   59477 kubeadm.go:934] updating node { 192.168.72.113 8443 v1.30.3 crio true true} ...
	I0722 11:51:02.488035   59477 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-802149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:02.488123   59477 ssh_runner.go:195] Run: crio config
	I0722 11:51:02.532769   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:51:02.532790   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:02.532801   59477 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:02.532833   59477 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.113 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-802149 NodeName:embed-certs-802149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:51:02.533018   59477 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.113
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-802149"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.113
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.113"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:02.533107   59477 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 11:51:02.543311   59477 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:02.543385   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:02.552865   59477 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0722 11:51:02.569231   59477 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:02.584952   59477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0722 11:51:02.601722   59477 ssh_runner.go:195] Run: grep 192.168.72.113	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:02.605830   59477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.113	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:02.617991   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:02.739082   59477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:02.756204   59477 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149 for IP: 192.168.72.113
	I0722 11:51:02.756226   59477 certs.go:194] generating shared ca certs ...
	I0722 11:51:02.756254   59477 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:02.756452   59477 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:02.756509   59477 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:02.756521   59477 certs.go:256] generating profile certs ...
	I0722 11:51:02.756641   59477 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/client.key
	I0722 11:51:02.756720   59477 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key.447fbea1
	I0722 11:51:02.756767   59477 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.key
	I0722 11:51:02.756907   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:02.756955   59477 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:02.756968   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:02.757004   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:02.757037   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:02.757073   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:02.757130   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:02.758009   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:02.791767   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:02.833143   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:02.859372   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:02.888441   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0722 11:51:02.926712   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 11:51:02.963931   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:02.986981   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 11:51:03.010885   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:03.033851   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:03.057467   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:03.080230   59477 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:03.096981   59477 ssh_runner.go:195] Run: openssl version
	I0722 11:51:03.103002   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:03.114012   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.118692   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.118743   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.124703   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:03.134986   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:03.145119   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.149396   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.149442   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.154767   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:03.165063   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:03.175292   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.179650   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.179691   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.184991   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:51:03.195065   59477 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:03.199423   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:03.205027   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:03.210699   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:03.216411   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:03.221888   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:03.227658   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
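Each `openssl x509 -checkend 86400` call above asks whether the given control-plane certificate will still be valid 24 hours from now. The same check expressed with Go's crypto/x509 (a sketch that reads the PEM file directly instead of shelling out):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at pemPath expires before
// now+window, i.e. the condition `openssl x509 -checkend <seconds>` flags.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```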
	I0722 11:51:03.233098   59477 kubeadm.go:392] StartCluster: {Name:embed-certs-802149 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:03.233171   59477 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:03.233221   59477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:03.269240   59477 cri.go:89] found id: ""
	I0722 11:51:03.269311   59477 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:03.279739   59477 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:03.279758   59477 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:03.279809   59477 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:03.289523   59477 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:03.290456   59477 kubeconfig.go:125] found "embed-certs-802149" server: "https://192.168.72.113:8443"
	I0722 11:51:03.292369   59477 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:03.301716   59477 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.113
	I0722 11:51:03.301749   59477 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:03.301758   59477 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:03.301794   59477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:03.337520   59477 cri.go:89] found id: ""
	I0722 11:51:03.337587   59477 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:03.352758   59477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:03.362272   59477 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:03.362305   59477 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:03.362350   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:51:03.370574   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:03.370621   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:03.379339   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:51:03.387427   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:03.387470   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:03.395970   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:51:03.404226   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:03.404280   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:03.412683   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:51:03.420838   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:03.420877   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:03.429146   59477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:03.440442   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:03.565768   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:04.457748   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:00.608285   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:00.608737   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:00.608759   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:00.608685   60690 retry.go:31] will retry after 728.049334ms: waiting for machine to come up
	I0722 11:51:01.337864   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:01.338406   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:01.338437   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:01.338329   60690 retry.go:31] will retry after 1.060339766s: waiting for machine to come up
	I0722 11:51:02.400096   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:02.400633   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:02.400664   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:02.400580   60690 retry.go:31] will retry after 957.922107ms: waiting for machine to come up
	I0722 11:51:03.360231   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:03.360663   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:03.360692   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:03.360612   60690 retry.go:31] will retry after 1.717107267s: waiting for machine to come up
	I0722 11:51:05.080655   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:05.081172   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:05.081196   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:05.081111   60690 retry.go:31] will retry after 1.708281457s: waiting for machine to come up
	I0722 11:51:04.673803   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:04.746647   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
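On this restart path the cluster is not re-initialized with a full `kubeadm init`; instead the individual init phases are replayed in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly written `/var/tmp/minikube/kubeadm.yaml`. A sketch of driving those phases — the phase list is taken from the log above, the exec helper is generic rather than minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Phases in the order the log above runs them.
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	const kubeadm = "/var/lib/minikube/binaries/v1.30.3/kubeadm"
	const config = "/var/tmp/minikube/kubeadm.yaml"
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", config)
		out, err := exec.Command(kubeadm, args...).CombinedOutput()
		fmt.Printf("kubeadm init phase %s: err=%v\n%s", phase, err, out)
	}
}
```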
	I0722 11:51:04.870194   59477 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:04.870304   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.370787   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.870977   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.971259   59477 api_server.go:72] duration metric: took 1.101066217s to wait for apiserver process to appear ...
	I0722 11:51:05.971291   59477 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:51:05.971313   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:05.971841   59477 api_server.go:269] stopped: https://192.168.72.113:8443/healthz: Get "https://192.168.72.113:8443/healthz": dial tcp 192.168.72.113:8443: connect: connection refused
	I0722 11:51:06.471490   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.174013   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:09.174041   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:09.174055   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.201462   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.201513   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
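The `/healthz` probing above follows the usual restart progression: connection refused while the apiserver static pod starts, then 403 for the anonymous request until the RBAC bootstrap roles exist, then 500 with individual post-start hooks still failing, and eventually 200. A minimal polling sketch against that endpoint — TLS verification is skipped because the probe is anonymous; this is illustrative, not minikube's api_server.go:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz until it returns 200 OK or the
// timeout elapses; 403 and 500 responses are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // anonymous probe
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.113:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```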
	I0722 11:51:09.471884   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.477573   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.477592   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:06.790946   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:06.791370   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:06.791398   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:06.791331   60690 retry.go:31] will retry after 2.398904394s: waiting for machine to come up
	I0722 11:51:09.193385   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:09.193778   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:09.193806   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:09.193704   60690 retry.go:31] will retry after 2.18416034s: waiting for machine to come up
	I0722 11:51:09.972279   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.982112   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.982144   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:10.471495   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:10.478784   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I0722 11:51:10.487326   59477 api_server.go:141] control plane version: v1.30.3
	I0722 11:51:10.487355   59477 api_server.go:131] duration metric: took 4.516056164s to wait for apiserver health ...
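The repeated 500 responses above come from the apiserver's /healthz endpoint while the rbac/bootstrap-roles and scheduling post-start hooks are still completing; minikube simply keeps polling until the endpoint returns 200, as it does at 11:51:10.478. A minimal sketch of that polling pattern follows (illustrative only, not minikube's actual api_server.go; the URL and timeout are taken from the log, everything else is assumed, and TLS verification is skipped only for brevity):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
// or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // control plane is healthy
			}
			fmt.Printf("healthz returned %d, retrying...\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.113:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}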
	I0722 11:51:10.487365   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:51:10.487374   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:10.488949   59477 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:51:10.490288   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:51:10.507047   59477 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
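The log only records that a 496-byte /etc/cni/net.d/1-k8s.conflist was copied to the VM; the file contents are not shown. For orientation, a bridge CNI config of the general shape used with the crio runtime might look like the sketch below (the field values are assumptions for illustration, not the bytes minikube actually wrote):

package main

import "os"

// Illustrative bridge CNI configuration; the real 1-k8s.conflist written by
// minikube may differ in name, subnet and plugin options.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    }
  ]
}`

func main() {
	// Written to the CNI config directory that cri-o scans on startup.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}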
	I0722 11:51:10.526828   59477 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:51:10.541695   59477 system_pods.go:59] 8 kube-system pods found
	I0722 11:51:10.541731   59477 system_pods.go:61] "coredns-7db6d8ff4d-s2zgw" [13ffaca7-beca-4c43-b7a7-2167fe71295c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:51:10.541741   59477 system_pods.go:61] "etcd-embed-certs-802149" [f81bfdc3-cc8f-40d3-9f6c-6b84b6490c07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:51:10.541752   59477 system_pods.go:61] "kube-apiserver-embed-certs-802149" [325b1597-385e-44df-b65c-2de853d792eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:51:10.541760   59477 system_pods.go:61] "kube-controller-manager-embed-certs-802149" [25d3ae23-fe5d-46b7-8d93-917d7c83912b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:51:10.541772   59477 system_pods.go:61] "kube-proxy-t9lkm" [0712acb3-3926-4b78-9c64-a7e46b1a4b18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 11:51:10.541780   59477 system_pods.go:61] "kube-scheduler-embed-certs-802149" [b521ffd3-9422-4df4-9f25-5e81a2d0fa9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:51:10.541788   59477 system_pods.go:61] "metrics-server-569cc877fc-wm2w8" [db886758-d7bb-41b3-b127-6f9fef839af0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:51:10.541799   59477 system_pods.go:61] "storage-provisioner" [291229fb-8a57-4976-911c-070ccc93adcd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 11:51:10.541810   59477 system_pods.go:74] duration metric: took 14.964696ms to wait for pod list to return data ...
	I0722 11:51:10.541822   59477 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:51:10.545280   59477 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:51:10.545307   59477 node_conditions.go:123] node cpu capacity is 2
	I0722 11:51:10.545327   59477 node_conditions.go:105] duration metric: took 3.49089ms to run NodePressure ...
	I0722 11:51:10.545349   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:10.812864   59477 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:51:10.817360   59477 kubeadm.go:739] kubelet initialised
	I0722 11:51:10.817379   59477 kubeadm.go:740] duration metric: took 4.491449ms waiting for restarted kubelet to initialise ...
	I0722 11:51:10.817387   59477 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:51:10.823766   59477 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.829370   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.829399   59477 pod_ready.go:81] duration metric: took 5.605447ms for pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.829411   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.829420   59477 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.835224   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "etcd-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.835250   59477 pod_ready.go:81] duration metric: took 5.819727ms for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.835261   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "etcd-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.835270   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.840324   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.840355   59477 pod_ready.go:81] duration metric: took 5.074415ms for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.840369   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.840378   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.939805   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.939828   59477 pod_ready.go:81] duration metric: took 99.423274ms for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.939837   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.939843   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t9lkm" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:11.329932   59477 pod_ready.go:92] pod "kube-proxy-t9lkm" in "kube-system" namespace has status "Ready":"True"
	I0722 11:51:11.329954   59477 pod_ready.go:81] duration metric: took 390.103451ms for pod "kube-proxy-t9lkm" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:11.329964   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:13.336193   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
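After kubeadm finishes the addon phase, the tool waits up to 4m0s for each system-critical pod to report Ready, skipping pods whose node is itself not Ready (the "skipping!" lines above). A condensed sketch of that readiness poll using client-go follows (a hypothetical helper, not minikube's pod_ready.go; the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod in kube-system until its Ready condition is True.
func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %q not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-scheduler-embed-certs-802149", 4*time.Minute))
}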
	I0722 11:51:11.378924   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:11.379301   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:11.379324   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:11.379257   60690 retry.go:31] will retry after 3.119433482s: waiting for machine to come up
	I0722 11:51:14.501549   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.502004   59674 main.go:141] libmachine: (old-k8s-version-101261) Found IP for machine: 192.168.50.51
	I0722 11:51:14.502029   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has current primary IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.502040   59674 main.go:141] libmachine: (old-k8s-version-101261) Reserving static IP address...
	I0722 11:51:14.502410   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "old-k8s-version-101261", mac: "52:54:00:e5:34:9a", ip: "192.168.50.51"} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.502429   59674 main.go:141] libmachine: (old-k8s-version-101261) Reserved static IP address: 192.168.50.51
	I0722 11:51:14.502448   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | skip adding static IP to network mk-old-k8s-version-101261 - found existing host DHCP lease matching {name: "old-k8s-version-101261", mac: "52:54:00:e5:34:9a", ip: "192.168.50.51"}
	I0722 11:51:14.502464   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Getting to WaitForSSH function...
	I0722 11:51:14.502481   59674 main.go:141] libmachine: (old-k8s-version-101261) Waiting for SSH to be available...
	I0722 11:51:14.504709   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.504989   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.505018   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.505192   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using SSH client type: external
	I0722 11:51:14.505229   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa (-rw-------)
	I0722 11:51:14.505273   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:14.505287   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | About to run SSH command:
	I0722 11:51:14.505300   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | exit 0
	I0722 11:51:14.628343   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | SSH cmd err, output: <nil>: 
	I0722 11:51:14.628747   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetConfigRaw
	I0722 11:51:14.629343   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:14.631934   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.632294   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.632323   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.632541   59674 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/config.json ...
	I0722 11:51:14.632730   59674 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:14.632747   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:14.632934   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.635214   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.635567   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.635594   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.635663   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.635887   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.636070   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.636212   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.636492   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.636656   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.636665   59674 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:14.745179   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:14.745210   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:14.745456   59674 buildroot.go:166] provisioning hostname "old-k8s-version-101261"
	I0722 11:51:14.745482   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:14.745664   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.748709   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.749155   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.749187   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.749356   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.749528   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.749708   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.749851   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.750115   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.750325   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.750339   59674 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-101261 && echo "old-k8s-version-101261" | sudo tee /etc/hostname
	I0722 11:51:14.878323   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-101261
	
	I0722 11:51:14.878374   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.881403   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.881776   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.881799   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.882004   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.882191   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.882368   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.882523   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.882714   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.882886   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.882914   59674 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-101261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-101261/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-101261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:15.005182   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:15.005211   59674 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:15.005232   59674 buildroot.go:174] setting up certificates
	I0722 11:51:15.005244   59674 provision.go:84] configureAuth start
	I0722 11:51:15.005257   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:15.005510   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:15.008414   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.008818   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.008842   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.009021   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.011255   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.011549   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.011571   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.011712   59674 provision.go:143] copyHostCerts
	I0722 11:51:15.011784   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:15.011798   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:15.011862   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:15.011991   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:15.012003   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:15.012033   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:15.012117   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:15.012126   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:15.012156   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:15.012235   59674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-101261 san=[127.0.0.1 192.168.50.51 localhost minikube old-k8s-version-101261]
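provision.go regenerates the machine's server certificate against the minikube CA with the SANs listed in the line above (127.0.0.1, the VM IP, localhost, and the host names). A compressed sketch of issuing a SAN certificate with the Go standard library follows (illustrative only and self-signed for brevity; the real flow signs with the minikube CA key and lives in minikube's own cert helpers):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-101261"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the provisioning log line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-101261"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.51")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}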
	I0722 11:51:16.173298   60225 start.go:364] duration metric: took 2m0.300081245s to acquireMachinesLock for "default-k8s-diff-port-605740"
	I0722 11:51:16.173351   60225 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:51:16.173359   60225 fix.go:54] fixHost starting: 
	I0722 11:51:16.173747   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:51:16.173788   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:51:16.189994   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0722 11:51:16.190364   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:51:16.190849   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:51:16.190880   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:51:16.191295   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:51:16.191520   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:16.191701   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:51:16.193226   60225 fix.go:112] recreateIfNeeded on default-k8s-diff-port-605740: state=Stopped err=<nil>
	I0722 11:51:16.193246   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	W0722 11:51:16.193413   60225 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:51:16.195294   60225 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-605740" ...
	I0722 11:51:15.514379   59674 provision.go:177] copyRemoteCerts
	I0722 11:51:15.514438   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:15.514471   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.517061   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.517350   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.517375   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.517523   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.517692   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.517856   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.517976   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:15.598446   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:15.622512   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 11:51:15.645865   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 11:51:15.669136   59674 provision.go:87] duration metric: took 663.880253ms to configureAuth
	I0722 11:51:15.669166   59674 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:15.669360   59674 config.go:182] Loaded profile config "old-k8s-version-101261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 11:51:15.669441   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.672245   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.672720   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.672769   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.672859   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.673066   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.673228   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.673348   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.673589   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:15.673764   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:15.673784   59674 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:15.935046   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:15.935071   59674 machine.go:97] duration metric: took 1.302328915s to provisionDockerMachine
	I0722 11:51:15.935082   59674 start.go:293] postStartSetup for "old-k8s-version-101261" (driver="kvm2")
	I0722 11:51:15.935094   59674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:15.935114   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:15.935445   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:15.935485   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.938454   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.938802   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.938828   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.939013   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.939212   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.939341   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.939477   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.023536   59674 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:16.028446   59674 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:16.028474   59674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:16.028542   59674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:16.028639   59674 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:16.028746   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:16.038705   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:16.065421   59674 start.go:296] duration metric: took 130.328201ms for postStartSetup
	I0722 11:51:16.065455   59674 fix.go:56] duration metric: took 19.008317885s for fixHost
	I0722 11:51:16.065480   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.068098   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.068330   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.068354   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.068486   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.068697   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.068883   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.069035   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.069215   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:16.069371   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:16.069380   59674 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:51:16.173115   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649076.142588532
	
	I0722 11:51:16.173135   59674 fix.go:216] guest clock: 1721649076.142588532
	I0722 11:51:16.173149   59674 fix.go:229] Guest: 2024-07-22 11:51:16.142588532 +0000 UTC Remote: 2024-07-22 11:51:16.065460257 +0000 UTC m=+220.687192060 (delta=77.128275ms)
	I0722 11:51:16.173189   59674 fix.go:200] guest clock delta is within tolerance: 77.128275ms
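The guest clock check compares the VM's `date +%s.%N` output against the host's wall clock and only resyncs when the delta exceeds a tolerance; here the 77ms delta is accepted. A small sketch of that comparison (the two timestamps come from the log; the tolerance value is an assumption for illustration, not minikube's exact threshold):

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` captured from the guest, as in the log above.
	guestRaw := "1721649076.142588532"
	secs, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	// Host wall clock at the time of the check, also from the log.
	host := time.Date(2024, 7, 22, 11, 51, 16, 65460257, time.UTC)
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	if delta > tolerance {
		fmt.Printf("guest clock skewed by %s, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
	}
}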
	I0722 11:51:16.173196   59674 start.go:83] releasing machines lock for "old-k8s-version-101261", held for 19.116093793s
	I0722 11:51:16.173224   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.173497   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:16.176102   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.176522   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.176564   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.176712   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177189   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177387   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177476   59674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:16.177519   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.177627   59674 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:16.177650   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.180365   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180402   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180751   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.180773   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180806   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.180819   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180908   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.181020   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.181091   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.181168   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.181254   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.181331   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.181346   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.181492   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.262013   59674 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:16.292921   59674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:16.437729   59674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:16.443840   59674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:16.443929   59674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:16.459686   59674 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:16.459703   59674 start.go:495] detecting cgroup driver to use...
	I0722 11:51:16.459761   59674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:16.474514   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:16.487808   59674 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:16.487862   59674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:16.500977   59674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:16.514210   59674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:16.629558   59674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:16.810274   59674 docker.go:233] disabling docker service ...
	I0722 11:51:16.810351   59674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:16.829708   59674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:16.848587   59674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:16.973745   59674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:17.114538   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:17.128727   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:17.147575   59674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 11:51:17.147628   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.157881   59674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:17.157939   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.168881   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.179407   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.189894   59674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:17.201433   59674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:17.210901   59674 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:17.210954   59674 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:17.224683   59674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:51:17.235711   59674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:17.366833   59674 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:17.508852   59674 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:17.508932   59674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:17.514001   59674 start.go:563] Will wait 60s for crictl version
	I0722 11:51:17.514051   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:17.517678   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:17.555193   59674 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:17.555272   59674 ssh_runner.go:195] Run: crio --version
	I0722 11:51:17.583250   59674 ssh_runner.go:195] Run: crio --version
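After rewriting /etc/crio/crio.conf.d/02-crio.conf (pause image and cgroupfs cgroup manager) and restarting the service, the tool waits up to 60s for /var/run/crio/crio.sock to appear before calling crictl. A minimal sketch of that socket wait (an illustrative local-filesystem version of what runs over SSH in the log):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket stats the path until it exists or the timeout elapses,
// mirroring the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}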
	I0722 11:51:17.615045   59674 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 11:51:15.837077   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:17.838129   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:17.616423   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:17.619616   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:17.620012   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:17.620043   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:17.620213   59674 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:17.624632   59674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:17.639759   59674 kubeadm.go:883] updating cluster {Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:17.639882   59674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 11:51:17.639923   59674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:17.688299   59674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:51:17.688370   59674 ssh_runner.go:195] Run: which lz4
	I0722 11:51:17.692462   59674 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 11:51:17.696723   59674 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:51:17.696761   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 11:51:19.364933   59674 crio.go:462] duration metric: took 1.672511697s to copy over tarball
	I0722 11:51:19.365010   59674 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:16.196500   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Start
	I0722 11:51:16.196676   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring networks are active...
	I0722 11:51:16.197307   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring network default is active
	I0722 11:51:16.197719   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring network mk-default-k8s-diff-port-605740 is active
	I0722 11:51:16.198143   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Getting domain xml...
	I0722 11:51:16.198839   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Creating domain...
	I0722 11:51:17.463368   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting to get IP...
	I0722 11:51:17.464268   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.464666   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.464716   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:17.464632   60829 retry.go:31] will retry after 215.824583ms: waiting for machine to come up
	I0722 11:51:17.682231   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.682588   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.682616   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:17.682546   60829 retry.go:31] will retry after 345.816562ms: waiting for machine to come up
	I0722 11:51:18.030040   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.030591   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.030625   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.030526   60829 retry.go:31] will retry after 332.854172ms: waiting for machine to come up
	I0722 11:51:18.365009   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.365493   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.365522   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.365455   60829 retry.go:31] will retry after 478.33893ms: waiting for machine to come up
	I0722 11:51:18.846014   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.846447   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.846475   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.846386   60829 retry.go:31] will retry after 484.269461ms: waiting for machine to come up
	I0722 11:51:19.332181   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:19.332572   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:19.332607   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:19.332523   60829 retry.go:31] will retry after 856.318702ms: waiting for machine to come up
	I0722 11:51:20.190301   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.190751   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.190775   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:20.190702   60829 retry.go:31] will retry after 747.6345ms: waiting for machine to come up
	I0722 11:51:19.838679   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:21.850685   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:24.338532   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:22.347245   59674 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.982204367s)
	I0722 11:51:22.347275   59674 crio.go:469] duration metric: took 2.982313685s to extract the tarball
	I0722 11:51:22.347283   59674 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:22.390059   59674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:22.429356   59674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:51:22.429383   59674 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 11:51:22.429499   59674 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.429520   59674 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.429524   59674 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.429545   59674 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 11:51:22.429498   59674 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.429497   59674 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.429529   59674 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.429498   59674 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.431549   59674 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.431556   59674 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 11:51:22.431570   59674 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.431588   59674 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.431611   59674 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.431555   59674 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.431666   59674 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.431675   59674 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.603462   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.604733   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.608788   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.611177   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.616981   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.634838   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.674004   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 11:51:22.706162   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.730052   59674 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 11:51:22.730112   59674 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 11:51:22.730129   59674 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.730142   59674 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.730183   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.730196   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.760229   59674 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 11:51:22.760271   59674 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.760322   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.787207   59674 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 11:51:22.787244   59674 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 11:51:22.787254   59674 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.787273   59674 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.787303   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.787311   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.828611   59674 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 11:51:22.828656   59674 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.828703   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.841609   59674 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 11:51:22.841648   59674 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 11:51:22.841692   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.913517   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.913549   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.913557   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.913605   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.913519   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.913605   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.913625   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 11:51:23.063640   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 11:51:23.063652   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 11:51:23.063742   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 11:51:23.063766   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 11:51:23.070202   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0722 11:51:23.073265   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 11:51:23.073310   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 11:51:23.073358   59674 cache_images.go:92] duration metric: took 643.962788ms to LoadCachedImages
	W0722 11:51:23.073425   59674 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0722 11:51:23.073438   59674 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.20.0 crio true true} ...
	I0722 11:51:23.073584   59674 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-101261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:23.073666   59674 ssh_runner.go:195] Run: crio config
	I0722 11:51:23.125532   59674 cni.go:84] Creating CNI manager for ""
	I0722 11:51:23.125554   59674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:23.125566   59674 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:23.125590   59674 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-101261 NodeName:old-k8s-version-101261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 11:51:23.125753   59674 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-101261"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:23.125818   59674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 11:51:23.136207   59674 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:23.136277   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:23.146103   59674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0722 11:51:23.163756   59674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:23.183108   59674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0722 11:51:23.201223   59674 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:23.205369   59674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:23.218711   59674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:23.339415   59674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:23.358601   59674 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261 for IP: 192.168.50.51
	I0722 11:51:23.358622   59674 certs.go:194] generating shared ca certs ...
	I0722 11:51:23.358654   59674 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:23.358813   59674 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:23.358865   59674 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:23.358877   59674 certs.go:256] generating profile certs ...
	I0722 11:51:23.358990   59674 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/client.key
	I0722 11:51:23.359058   59674 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key.455618c3
	I0722 11:51:23.359110   59674 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key
	I0722 11:51:23.359248   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:23.359286   59674 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:23.359300   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:23.359332   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:23.359363   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:23.359393   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:23.359445   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:23.360290   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:23.407113   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:23.439799   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:23.484136   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:23.513902   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 11:51:23.551266   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:51:23.581930   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:23.612470   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:51:23.644003   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:23.671068   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:23.695514   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:23.722711   59674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:23.742312   59674 ssh_runner.go:195] Run: openssl version
	I0722 11:51:23.749680   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:23.763975   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.769799   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.769848   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.777286   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:51:23.788007   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:23.799005   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.803367   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.803405   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.809239   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:23.820095   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:23.832492   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.837230   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.837268   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.842861   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:23.853772   59674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:23.858178   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:23.864134   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:23.870035   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:23.875939   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:23.881552   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:23.887286   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 11:51:23.893029   59674 kubeadm.go:392] StartCluster: {Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:23.893133   59674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:23.893184   59674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:23.939121   59674 cri.go:89] found id: ""
	I0722 11:51:23.939187   59674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:23.951089   59674 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:23.951108   59674 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:23.951154   59674 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:23.962212   59674 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:23.963627   59674 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-101261" does not appear in /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:51:23.964627   59674 kubeconfig.go:62] /home/jenkins/minikube-integration/19313-5960/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-101261" cluster setting kubeconfig missing "old-k8s-version-101261" context setting]
	I0722 11:51:23.966075   59674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:24.070513   59674 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:24.081628   59674 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0722 11:51:24.081662   59674 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:24.081674   59674 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:24.081728   59674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:24.117673   59674 cri.go:89] found id: ""
	I0722 11:51:24.117750   59674 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:24.134081   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:24.144294   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:24.144315   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:24.144366   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:51:24.153640   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:24.153685   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:24.163252   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:51:24.173762   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:24.173815   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:24.183272   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:51:24.194090   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:24.194148   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:24.205213   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:51:24.215709   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:24.215787   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:24.226876   59674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:24.237966   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:24.378277   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:20.939620   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.940073   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.940106   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:20.940007   60829 retry.go:31] will retry after 1.295925992s: waiting for machine to come up
	I0722 11:51:22.237614   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:22.238096   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:22.238128   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:22.238045   60829 retry.go:31] will retry after 1.652562745s: waiting for machine to come up
	I0722 11:51:23.891976   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:23.892496   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:23.892519   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:23.892468   60829 retry.go:31] will retry after 2.313623774s: waiting for machine to come up
	I0722 11:51:24.839903   59477 pod_ready.go:92] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:51:24.839939   59477 pod_ready.go:81] duration metric: took 13.509966584s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:24.839957   59477 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:26.847104   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:29.345675   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:25.787025   59674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.408710522s)
	I0722 11:51:25.787059   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.031231   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.120122   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.216108   59674 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:26.216204   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:26.717257   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:27.216782   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:27.716476   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:28.216529   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:28.716302   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:29.216249   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:29.717071   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:30.216364   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:26.207294   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:26.207841   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:26.207867   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:26.207805   60829 retry.go:31] will retry after 2.606127418s: waiting for machine to come up
	I0722 11:51:28.817432   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:28.817795   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:28.817851   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:28.817748   60829 retry.go:31] will retry after 2.617524673s: waiting for machine to come up
	I0722 11:51:31.346476   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:33.847820   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:30.716961   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.216474   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.716685   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:32.216748   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:32.716886   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:33.216333   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:33.717052   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:34.217128   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:34.716466   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:35.216975   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.436413   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:31.436710   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:31.436745   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:31.436665   60829 retry.go:31] will retry after 3.455203757s: waiting for machine to come up
	I0722 11:51:34.896151   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.896595   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Found IP for machine: 192.168.39.87
	I0722 11:51:34.896619   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Reserving static IP address...
	I0722 11:51:34.896637   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has current primary IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.897007   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Reserved static IP address: 192.168.39.87
	I0722 11:51:34.897037   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-605740", mac: "52:54:00:23:45:e9", ip: "192.168.39.87"} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:34.897074   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for SSH to be available...
	I0722 11:51:34.897094   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | skip adding static IP to network mk-default-k8s-diff-port-605740 - found existing host DHCP lease matching {name: "default-k8s-diff-port-605740", mac: "52:54:00:23:45:e9", ip: "192.168.39.87"}
	I0722 11:51:34.897107   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Getting to WaitForSSH function...
	I0722 11:51:34.899104   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.899428   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:34.899450   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.899570   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Using SSH client type: external
	I0722 11:51:34.899594   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa (-rw-------)
	I0722 11:51:34.899619   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:34.899636   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | About to run SSH command:
	I0722 11:51:34.899651   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | exit 0
	I0722 11:51:35.028440   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | SSH cmd err, output: <nil>: 
	I0722 11:51:35.028814   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetConfigRaw
	I0722 11:51:35.029407   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:35.031646   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.031967   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.031998   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.032179   60225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/config.json ...
	I0722 11:51:35.032355   60225 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:35.032372   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:35.032587   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.034608   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.034924   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.034944   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.035089   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.035242   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.035368   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.035497   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.035637   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.035812   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.035823   60225 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:35.148621   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:35.148655   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.148914   60225 buildroot.go:166] provisioning hostname "default-k8s-diff-port-605740"
	I0722 11:51:35.148945   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.149128   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.151753   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.152146   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.152170   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.152294   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.152461   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.152591   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.152706   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.152847   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.153057   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.153079   60225 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-605740 && echo "default-k8s-diff-port-605740" | sudo tee /etc/hostname
	I0722 11:51:35.278248   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-605740
	
	I0722 11:51:35.278277   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.281778   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.282158   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.282189   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.282361   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.282539   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.282712   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.282826   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.283014   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.283239   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.283266   60225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-605740' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-605740/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-605740' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:35.405142   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:35.405176   60225 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:35.405215   60225 buildroot.go:174] setting up certificates
	I0722 11:51:35.405228   60225 provision.go:84] configureAuth start
	I0722 11:51:35.405240   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.405502   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:35.407912   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.408262   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.408284   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.408435   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.410456   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.410794   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.410821   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.410959   60225 provision.go:143] copyHostCerts
	I0722 11:51:35.411021   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:35.411034   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:35.411613   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:35.411720   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:35.411729   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:35.411749   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:35.411803   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:35.411811   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:35.411827   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:35.411881   60225 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-605740 san=[127.0.0.1 192.168.39.87 default-k8s-diff-port-605740 localhost minikube]
	I0722 11:51:36.476985   58921 start.go:364] duration metric: took 53.473936955s to acquireMachinesLock for "no-preload-339929"
	I0722 11:51:36.477060   58921 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:51:36.477071   58921 fix.go:54] fixHost starting: 
	I0722 11:51:36.477497   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:51:36.477538   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:51:36.494783   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33955
	I0722 11:51:36.495220   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:51:36.495728   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:51:36.495749   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:51:36.496045   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:51:36.496241   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:36.496399   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:51:36.497658   58921 fix.go:112] recreateIfNeeded on no-preload-339929: state=Stopped err=<nil>
	I0722 11:51:36.497681   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	W0722 11:51:36.497840   58921 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:51:36.499655   58921 out.go:177] * Restarting existing kvm2 VM for "no-preload-339929" ...
	I0722 11:51:35.787061   60225 provision.go:177] copyRemoteCerts
	I0722 11:51:35.787119   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:35.787143   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.789647   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.790048   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.790081   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.790289   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.790502   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.790665   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.790815   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:35.878791   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0722 11:51:35.902034   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:51:35.925234   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:35.948008   60225 provision.go:87] duration metric: took 542.764534ms to configureAuth
	I0722 11:51:35.948038   60225 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:35.948231   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:51:35.948315   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.951029   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.951381   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.951413   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.951561   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.951777   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.951927   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.952064   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.952196   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.952447   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.952465   60225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:36.234284   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:36.234329   60225 machine.go:97] duration metric: took 1.201960693s to provisionDockerMachine
	I0722 11:51:36.234342   60225 start.go:293] postStartSetup for "default-k8s-diff-port-605740" (driver="kvm2")
	I0722 11:51:36.234355   60225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:36.234375   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.234712   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:36.234742   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.237536   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.237897   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.237928   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.238045   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.238253   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.238435   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.238580   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.322600   60225 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:36.326734   60225 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:36.326753   60225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:36.326809   60225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:36.326893   60225 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:36.326981   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:36.335877   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:36.359701   60225 start.go:296] duration metric: took 125.346106ms for postStartSetup
	I0722 11:51:36.359734   60225 fix.go:56] duration metric: took 20.186375753s for fixHost
	I0722 11:51:36.359751   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.362282   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.362581   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.362603   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.362782   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.362976   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.363121   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.363218   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.363355   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:36.363506   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:36.363515   60225 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:51:36.476833   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649096.450051771
	
	I0722 11:51:36.476869   60225 fix.go:216] guest clock: 1721649096.450051771
	I0722 11:51:36.476877   60225 fix.go:229] Guest: 2024-07-22 11:51:36.450051771 +0000 UTC Remote: 2024-07-22 11:51:36.359737602 +0000 UTC m=+140.620851572 (delta=90.314169ms)
	I0722 11:51:36.476895   60225 fix.go:200] guest clock delta is within tolerance: 90.314169ms
	I0722 11:51:36.476900   60225 start.go:83] releasing machines lock for "default-k8s-diff-port-605740", held for 20.303575504s
	I0722 11:51:36.476926   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.477201   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:36.480567   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.480990   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.481020   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.481182   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481657   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481827   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481906   60225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:36.481947   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.482026   60225 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:36.482044   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.484577   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.484762   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485029   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.485054   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485199   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.485224   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485246   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.485393   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.485406   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.485524   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.485537   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.485652   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.485729   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.485788   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.565892   60225 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:36.592221   60225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:36.739153   60225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:36.746870   60225 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:36.746933   60225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:36.766745   60225 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:36.766769   60225 start.go:495] detecting cgroup driver to use...
	I0722 11:51:36.766837   60225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:36.782140   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:36.797037   60225 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:36.797118   60225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:36.810796   60225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:36.823955   60225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:36.943613   60225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:37.123238   60225 docker.go:233] disabling docker service ...
	I0722 11:51:37.123318   60225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:37.138682   60225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:37.153426   60225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:37.279469   60225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:37.404250   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:37.428047   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:37.446939   60225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 11:51:37.446994   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.457326   60225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:37.457400   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.468141   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.479246   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.489857   60225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:37.502713   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.517197   60225 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.537115   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.548917   60225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:37.559530   60225 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:37.559590   60225 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:37.574785   60225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:51:37.585589   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:37.730483   60225 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:37.888282   60225 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:37.888373   60225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:37.893498   60225 start.go:563] Will wait 60s for crictl version
	I0722 11:51:37.893555   60225 ssh_runner.go:195] Run: which crictl
	I0722 11:51:37.897212   60225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:37.940959   60225 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:37.941054   60225 ssh_runner.go:195] Run: crio --version
	I0722 11:51:37.969273   60225 ssh_runner.go:195] Run: crio --version
	I0722 11:51:38.001475   60225 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 11:51:36.345564   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:38.349105   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:35.716593   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.216517   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.716294   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:37.217023   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:37.716511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:38.216231   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:38.716522   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:39.216492   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:39.716478   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:40.216337   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.500994   58921 main.go:141] libmachine: (no-preload-339929) Calling .Start
	I0722 11:51:36.501149   58921 main.go:141] libmachine: (no-preload-339929) Ensuring networks are active...
	I0722 11:51:36.501737   58921 main.go:141] libmachine: (no-preload-339929) Ensuring network default is active
	I0722 11:51:36.502002   58921 main.go:141] libmachine: (no-preload-339929) Ensuring network mk-no-preload-339929 is active
	I0722 11:51:36.502421   58921 main.go:141] libmachine: (no-preload-339929) Getting domain xml...
	I0722 11:51:36.503225   58921 main.go:141] libmachine: (no-preload-339929) Creating domain...
	I0722 11:51:37.794982   58921 main.go:141] libmachine: (no-preload-339929) Waiting to get IP...
	I0722 11:51:37.795825   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:37.796235   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:37.796291   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:37.796218   61023 retry.go:31] will retry after 217.454766ms: waiting for machine to come up
	I0722 11:51:38.015757   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.016236   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.016258   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.016185   61023 retry.go:31] will retry after 374.564997ms: waiting for machine to come up
	I0722 11:51:38.392755   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.393280   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.393310   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.393238   61023 retry.go:31] will retry after 462.45005ms: waiting for machine to come up
	I0722 11:51:38.856969   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.857508   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.857539   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.857455   61023 retry.go:31] will retry after 440.89249ms: waiting for machine to come up
	I0722 11:51:39.300253   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:39.300834   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:39.300860   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:39.300774   61023 retry.go:31] will retry after 746.547558ms: waiting for machine to come up
	I0722 11:51:40.048708   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:40.049175   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:40.049211   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:40.049133   61023 retry.go:31] will retry after 608.540931ms: waiting for machine to come up
	I0722 11:51:38.002695   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:38.005678   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:38.006057   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:38.006085   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:38.006276   60225 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:38.010327   60225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:38.023216   60225 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:38.023326   60225 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:51:38.023375   60225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:38.059519   60225 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 11:51:38.059603   60225 ssh_runner.go:195] Run: which lz4
	I0722 11:51:38.063709   60225 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 11:51:38.068879   60225 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:51:38.068903   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 11:51:39.570299   60225 crio.go:462] duration metric: took 1.50662056s to copy over tarball
	I0722 11:51:39.570380   60225 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:40.846268   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:42.848761   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:40.716395   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:41.216516   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:41.716363   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:42.217236   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:42.716938   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:43.216950   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:43.717242   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.216318   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.716925   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.216991   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:40.658992   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:40.659502   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:40.659542   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:40.659447   61023 retry.go:31] will retry after 974.447874ms: waiting for machine to come up
	I0722 11:51:41.636057   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:41.636596   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:41.636620   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:41.636538   61023 retry.go:31] will retry after 1.040271869s: waiting for machine to come up
	I0722 11:51:42.678559   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:42.678995   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:42.679018   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:42.678938   61023 retry.go:31] will retry after 1.797018808s: waiting for machine to come up
	I0722 11:51:44.477360   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:44.477729   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:44.477764   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:44.477687   61023 retry.go:31] will retry after 2.040933698s: waiting for machine to come up
	I0722 11:51:41.921416   60225 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.35100934s)
	I0722 11:51:41.921453   60225 crio.go:469] duration metric: took 2.351127326s to extract the tarball
	I0722 11:51:41.921460   60225 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:41.959856   60225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:42.011834   60225 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:51:42.011864   60225 cache_images.go:84] Images are preloaded, skipping loading
	I0722 11:51:42.011874   60225 kubeadm.go:934] updating node { 192.168.39.87 8444 v1.30.3 crio true true} ...
	I0722 11:51:42.012016   60225 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-605740 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:42.012101   60225 ssh_runner.go:195] Run: crio config
	I0722 11:51:42.067629   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:51:42.067650   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:42.067661   60225 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:42.067681   60225 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.87 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-605740 NodeName:default-k8s-diff-port-605740 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.87"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:51:42.067849   60225 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.87
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-605740"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.87"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:42.067926   60225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 11:51:42.079267   60225 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:42.079331   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:42.089696   60225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0722 11:51:42.109204   60225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:42.125186   60225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0722 11:51:42.143217   60225 ssh_runner.go:195] Run: grep 192.168.39.87	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:42.147117   60225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.87	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:42.159283   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:42.297313   60225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:42.315795   60225 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740 for IP: 192.168.39.87
	I0722 11:51:42.315819   60225 certs.go:194] generating shared ca certs ...
	I0722 11:51:42.315838   60225 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:42.316036   60225 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:42.316104   60225 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:42.316121   60225 certs.go:256] generating profile certs ...
	I0722 11:51:42.316211   60225 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/client.key
	I0722 11:51:42.316281   60225 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.key.82803a6c
	I0722 11:51:42.316344   60225 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.key
	I0722 11:51:42.316515   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:42.316562   60225 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:42.316575   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:42.316606   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:42.316642   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:42.316673   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:42.316729   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:42.317611   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:42.368371   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:42.396161   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:42.423661   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:42.461478   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 11:51:42.492145   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:51:42.523047   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:42.551774   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 11:51:42.576922   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:42.600869   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:42.624223   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:42.647454   60225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:42.664055   60225 ssh_runner.go:195] Run: openssl version
	I0722 11:51:42.670102   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:42.681220   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.685927   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.685979   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.691823   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:42.702680   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:42.713592   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.719980   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.720042   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.727573   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:42.741805   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:42.756511   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.761951   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.762007   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.767540   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:51:42.777758   60225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:42.782242   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:42.787989   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:42.793552   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:42.799083   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:42.804666   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:42.810222   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 11:51:42.818545   60225 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:42.818639   60225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:42.818689   60225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:42.869630   60225 cri.go:89] found id: ""
	I0722 11:51:42.869706   60225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:42.881642   60225 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:42.881666   60225 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:42.881716   60225 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:42.891566   60225 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:42.892605   60225 kubeconfig.go:125] found "default-k8s-diff-port-605740" server: "https://192.168.39.87:8444"
	I0722 11:51:42.894819   60225 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:42.906152   60225 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.87
	I0722 11:51:42.906184   60225 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:42.906197   60225 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:42.906244   60225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:42.943687   60225 cri.go:89] found id: ""
	I0722 11:51:42.943765   60225 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:42.962989   60225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:42.974334   60225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:42.974351   60225 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:42.974398   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 11:51:42.985009   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:42.985069   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:42.996084   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 11:51:43.006592   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:43.006643   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:43.017500   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 11:51:43.026779   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:43.026853   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:43.037913   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 11:51:43.048504   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:43.048548   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:43.058045   60225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:43.067626   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:43.195638   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.027881   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.237863   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.306672   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.409525   60225 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:44.409655   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.909710   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.409772   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.465579   60225 api_server.go:72] duration metric: took 1.056052731s to wait for apiserver process to appear ...
	I0722 11:51:45.465613   60225 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:51:45.465634   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:45.466164   60225 api_server.go:269] stopped: https://192.168.39.87:8444/healthz: Get "https://192.168.39.87:8444/healthz": dial tcp 192.168.39.87:8444: connect: connection refused
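The api_server.go lines above show the restart waiting for the kube-apiserver: first for the process to appear, then for /healthz to answer, tolerating connection-refused, 403 and 500 responses along the way. The following is only an illustrative Go sketch of that style of probe, not minikube's own code; the URL, timeout, and the waitForHealthz helper are assumptions for the example (the apiserver serves a cluster-internal certificate, hence the skipped TLS verification).

// waitForHealthz polls an HTTPS /healthz endpoint until it returns 200 OK
// or the timeout elapses. Illustrative sketch only, not minikube's implementation.
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's serving cert is cluster-internal, so skip verification for the probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // 200 from /healthz: the control plane is serving
			}
			// 403/500 mean the apiserver is up but still finishing its post-start hooks.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for " + url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.87:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}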
	I0722 11:51:45.349550   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:47.847373   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:45.717299   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.216545   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.717273   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:47.217030   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:47.716837   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:48.216368   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:48.716993   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:49.216273   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:49.717087   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:50.216313   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.520086   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:46.520553   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:46.520583   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:46.520514   61023 retry.go:31] will retry after 2.21537525s: waiting for machine to come up
	I0722 11:51:48.737964   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:48.738435   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:48.738478   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:48.738387   61023 retry.go:31] will retry after 3.351574636s: waiting for machine to come up
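The libmachine "will retry after ..." messages above come from a wait loop that repeatedly asks the KVM driver for the VM's IP until a DHCP lease appears. A minimal sketch of that retry pattern is below; waitForIP and the stub lookup are hypothetical helpers for illustration, not the driver's real API.

// Sketch of a retry loop like the one logged above: keep querying for the
// machine's IP and back off between attempts until a deadline passes.
package main

import (
	"fmt"
	"time"
)

func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	backoff := time.Second
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil // the DHCP lease has appeared
		}
		time.Sleep(backoff)
		if backoff < 10*time.Second {
			backoff *= 2 // lengthen the wait, roughly like the logged retry intervals
		}
	}
	return "", fmt.Errorf("machine did not report an IP within %s", maxWait)
}

func main() {
	// Stub lookup that never finds an IP; a real caller would query the hypervisor's DHCP leases.
	ip, err := waitForIP(func() (string, error) { return "", nil }, 5*time.Second)
	fmt.Println(ip, err)
}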
	I0722 11:51:45.966026   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:48.955885   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:48.955919   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:48.955938   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.001144   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:49.001176   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:49.001190   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.011522   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:49.011567   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:49.466002   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.470318   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:49.470339   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:49.965932   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.974634   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:49.974659   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:50.466354   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:50.471348   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:50.471375   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:50.966014   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:50.970321   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:50.970344   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:51.466452   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:51.470676   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:51.470703   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:51.966303   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:51.970628   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:51.970654   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:52.466173   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:52.473153   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 200:
	ok
	I0722 11:51:52.479257   60225 api_server.go:141] control plane version: v1.30.3
	I0722 11:51:52.479280   60225 api_server.go:131] duration metric: took 7.013661456s to wait for apiserver health ...
	I0722 11:51:52.479289   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:51:52.479295   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:52.480886   60225 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:51:50.346624   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:52.847483   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:50.716844   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:51.216793   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:51.716262   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.216710   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.716538   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:53.216424   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:53.716256   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:54.216266   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:54.716357   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:55.217214   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.091480   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:52.091931   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:52.091958   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:52.091893   61023 retry.go:31] will retry after 3.862235046s: waiting for machine to come up
	I0722 11:51:52.481952   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:51:52.493302   60225 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:51:52.517874   60225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:51:52.525926   60225 system_pods.go:59] 8 kube-system pods found
	I0722 11:51:52.525951   60225 system_pods.go:61] "coredns-7db6d8ff4d-dp56v" [5027da7d-5dc8-4ac5-ae15-ec99dffdce28] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:51:52.525960   60225 system_pods.go:61] "etcd-default-k8s-diff-port-605740" [648d4b21-2c2a-4ac7-a114-660379463d7d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:51:52.525967   60225 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-605740" [89ae1525-c944-4645-8951-e8834c9347b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:51:52.525978   60225 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-605740" [ff83ae5c-1dea-4633-afb8-c6487d1463b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:51:52.525983   60225 system_pods.go:61] "kube-proxy-ssttk" [6967a89c-ac7d-413f-bd0e-504367edca66] Running
	I0722 11:51:52.525991   60225 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-605740" [f930864f-4486-4c95-96f2-3004f58e80b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:51:52.526001   60225 system_pods.go:61] "metrics-server-569cc877fc-mzcvn" [9913463e-4ff9-4baa-a26e-76694605652e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:51:52.526009   60225 system_pods.go:61] "storage-provisioner" [08880428-a182-4540-a6f7-afffa3fc82a6] Running
	I0722 11:51:52.526020   60225 system_pods.go:74] duration metric: took 8.125407ms to wait for pod list to return data ...
	I0722 11:51:52.526030   60225 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:51:52.528765   60225 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:51:52.528788   60225 node_conditions.go:123] node cpu capacity is 2
	I0722 11:51:52.528801   60225 node_conditions.go:105] duration metric: took 2.765554ms to run NodePressure ...
	I0722 11:51:52.528822   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:52.797071   60225 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:51:52.802281   60225 kubeadm.go:739] kubelet initialised
	I0722 11:51:52.802311   60225 kubeadm.go:740] duration metric: took 5.210344ms waiting for restarted kubelet to initialise ...
	I0722 11:51:52.802322   60225 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:51:52.808512   60225 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.819816   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.819849   60225 pod_ready.go:81] duration metric: took 11.258701ms for pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.819861   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.819870   60225 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.825916   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.825958   60225 pod_ready.go:81] duration metric: took 6.076418ms for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.825977   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.825990   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.832243   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.832272   60225 pod_ready.go:81] duration metric: took 6.26533ms for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.832286   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.832295   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:54.841497   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
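The pod_ready waits above poll each system-critical pod until its Ready condition turns True, skipping pods whose node is itself not yet Ready. A rough client-go sketch of that kind of wait follows; the kubeconfig path is a placeholder and the pod name is taken from the log purely as an example, so this is a sketch under those assumptions rather than minikube's own helper.

// Poll a kube-system pod until its Ready condition is True or a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder path; the test run uses the profile's own kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-controller-manager-default-k8s-diff-port-605740", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}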
	I0722 11:51:55.958678   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.959165   58921 main.go:141] libmachine: (no-preload-339929) Found IP for machine: 192.168.61.112
	I0722 11:51:55.959188   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has current primary IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.959195   58921 main.go:141] libmachine: (no-preload-339929) Reserving static IP address...
	I0722 11:51:55.959744   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "no-preload-339929", mac: "52:54:00:8d:72:69", ip: "192.168.61.112"} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:55.959774   58921 main.go:141] libmachine: (no-preload-339929) DBG | skip adding static IP to network mk-no-preload-339929 - found existing host DHCP lease matching {name: "no-preload-339929", mac: "52:54:00:8d:72:69", ip: "192.168.61.112"}
	I0722 11:51:55.959790   58921 main.go:141] libmachine: (no-preload-339929) Reserved static IP address: 192.168.61.112
	I0722 11:51:55.959806   58921 main.go:141] libmachine: (no-preload-339929) Waiting for SSH to be available...
	I0722 11:51:55.959817   58921 main.go:141] libmachine: (no-preload-339929) DBG | Getting to WaitForSSH function...
	I0722 11:51:55.962308   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.962703   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:55.962724   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.962853   58921 main.go:141] libmachine: (no-preload-339929) DBG | Using SSH client type: external
	I0722 11:51:55.962876   58921 main.go:141] libmachine: (no-preload-339929) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa (-rw-------)
	I0722 11:51:55.962924   58921 main.go:141] libmachine: (no-preload-339929) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:55.962946   58921 main.go:141] libmachine: (no-preload-339929) DBG | About to run SSH command:
	I0722 11:51:55.962963   58921 main.go:141] libmachine: (no-preload-339929) DBG | exit 0
	I0722 11:51:56.084629   58921 main.go:141] libmachine: (no-preload-339929) DBG | SSH cmd err, output: <nil>: 
	I0722 11:51:56.085007   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetConfigRaw
	I0722 11:51:56.085616   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:56.088120   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.088546   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.088576   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.088842   58921 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/config.json ...
	I0722 11:51:56.089066   58921 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:56.089088   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:56.089276   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.091216   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.091486   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.091508   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.091653   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.091823   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.091982   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.092132   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.092262   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.092434   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.092444   58921 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:56.192862   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:56.192891   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.193179   58921 buildroot.go:166] provisioning hostname "no-preload-339929"
	I0722 11:51:56.193207   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.193465   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.196195   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.196607   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.196637   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.196843   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.197048   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.197213   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.197358   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.197509   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.197707   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.197722   58921 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-339929 && echo "no-preload-339929" | sudo tee /etc/hostname
	I0722 11:51:56.309997   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-339929
	
	I0722 11:51:56.310019   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.312923   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.313263   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.313290   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.313481   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.313682   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.313882   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.314043   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.314223   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.314413   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.314435   58921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-339929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-339929/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-339929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:56.430088   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:56.430113   58921 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:56.430136   58921 buildroot.go:174] setting up certificates
	I0722 11:51:56.430147   58921 provision.go:84] configureAuth start
	I0722 11:51:56.430158   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.430428   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:56.433041   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.433421   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.433449   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.433619   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.436002   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.436300   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.436333   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.436508   58921 provision.go:143] copyHostCerts
	I0722 11:51:56.436579   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:56.436595   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:56.436665   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:56.436828   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:56.436843   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:56.436876   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:56.436950   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:56.436961   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:56.436987   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:56.437053   58921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.no-preload-339929 san=[127.0.0.1 192.168.61.112 localhost minikube no-preload-339929]
	I0722 11:51:56.792128   58921 provision.go:177] copyRemoteCerts
	I0722 11:51:56.792205   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:56.792238   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.794952   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.795254   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.795283   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.795439   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.795636   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.795772   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.795944   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:56.874574   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:56.898653   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 11:51:56.923200   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:51:56.946393   58921 provision.go:87] duration metric: took 516.233368ms to configureAuth
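The provisioning step above regenerates the machine's TLS server certificate, signed by the shared minikube CA, with a SAN list covering the loopback address, the VM's IP and its hostnames, and then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. Below is a standalone Go sketch of issuing such a CA-signed server certificate; the file names and SAN values simply mirror the log, the PKCS#1 RSA key format is an assumption, and this is not minikube's own code path.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Load the CA pair (paths are illustrative; assumes a PKCS#1 RSA CA key).
        caPEM, err := os.ReadFile("ca.pem")
        must(err)
        caKeyPEM, err := os.ReadFile("ca-key.pem")
        must(err)
        caBlock, _ := pem.Decode(caPEM)
        keyBlock, _ := pem.Decode(caKeyPEM)
        if caBlock == nil || keyBlock == nil {
            panic("could not decode CA PEM input")
        }
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        must(err)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        must(err)

        // Fresh key pair for the server certificate.
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        must(err)

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-339929"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN list taken from the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.112")},
            DNSNames:    []string{"localhost", "minikube", "no-preload-339929"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        must(err)
        must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
        must(pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}))
    }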
	I0722 11:51:56.946416   58921 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:56.946612   58921 config.go:182] Loaded profile config "no-preload-339929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 11:51:56.946702   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.949412   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.949923   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.949955   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.949982   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.950195   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.950330   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.950479   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.950591   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.950844   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.950865   58921 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:57.225885   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:57.225909   58921 machine.go:97] duration metric: took 1.136828183s to provisionDockerMachine
	I0722 11:51:57.225924   58921 start.go:293] postStartSetup for "no-preload-339929" (driver="kvm2")
	I0722 11:51:57.225941   58921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:57.225967   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.226315   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:57.226346   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.229404   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.229787   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.229816   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.230008   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.230210   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.230382   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.230518   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.317585   58921 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:57.323102   58921 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:57.323133   58921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:57.323218   58921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:57.323319   58921 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:57.323446   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:57.336656   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:57.365241   58921 start.go:296] duration metric: took 139.301981ms for postStartSetup
	I0722 11:51:57.365299   58921 fix.go:56] duration metric: took 20.888227284s for fixHost
	I0722 11:51:57.365322   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.368451   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.368792   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.368825   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.368964   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.369191   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.369362   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.369532   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.369698   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:57.369918   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:57.369929   58921 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 11:51:57.478389   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649117.454433204
	
	I0722 11:51:57.478414   58921 fix.go:216] guest clock: 1721649117.454433204
	I0722 11:51:57.478425   58921 fix.go:229] Guest: 2024-07-22 11:51:57.454433204 +0000 UTC Remote: 2024-07-22 11:51:57.365303623 +0000 UTC m=+356.953957779 (delta=89.129581ms)
	I0722 11:51:57.478469   58921 fix.go:200] guest clock delta is within tolerance: 89.129581ms
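The fix.go lines above read the guest clock over SSH with "date +%s.%N", compare it against the host's wall clock, and skip any resync when the delta stays inside a small tolerance (here about 89ms). A minimal local sketch of that comparison follows; the 2-second threshold is only an illustrative assumption, not minikube's configured value.

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Read the "guest" clock; in the real flow this command runs over SSH.
        out, err := exec.Command("date", "+%s.%N").Output()
        if err != nil {
            panic(err)
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))

        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // illustrative threshold, not minikube's value
        if delta > tolerance {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        } else {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        }
    }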
	I0722 11:51:57.478488   58921 start.go:83] releasing machines lock for "no-preload-339929", held for 21.001447333s
	I0722 11:51:57.478515   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.478798   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:57.481848   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.482283   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.482313   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.482464   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483024   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483211   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483286   58921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:57.483339   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.483594   58921 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:57.483620   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.486149   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486402   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486561   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.486591   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486746   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.486791   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.486808   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486969   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.487059   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.487141   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.487289   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.487306   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.487460   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.487645   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.591994   58921 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:57.598617   58921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:57.754364   58921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:57.761045   58921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:57.761104   58921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:57.778215   58921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:57.778244   58921 start.go:495] detecting cgroup driver to use...
	I0722 11:51:57.778315   58921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:57.794964   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:57.811232   58921 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:57.811292   58921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:57.826950   58921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:57.842302   58921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:57.971792   58921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:58.129047   58921 docker.go:233] disabling docker service ...
	I0722 11:51:58.129104   58921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:58.146348   58921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:58.160958   58921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:58.294011   58921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:58.414996   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:58.430045   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:58.456092   58921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0722 11:51:58.456186   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.471939   58921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:58.472003   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.485092   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.497749   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.510721   58921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:58.522286   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.535122   58921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.555717   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
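The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10, switches cgroup_manager to "cgroupfs" with conmon_cgroup = "pod", and makes sure default_sysctls contains net.ipv4.ip_unprivileged_port_start=0 so pods may bind low ports. A rough standalone Go sketch of the same line-oriented edits, working on a local copy of the file rather than over SSH:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "02-crio.conf" // local copy for illustration
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        conf := string(data)

        // Pin the pause image and the cgroup driver, mirroring the sed commands in the log.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

        // Allow pods to bind low ports without extra privileges.
        if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
            conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
        }

        if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
            panic(err)
        }
    }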
	I0722 11:51:58.567386   58921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:58.577638   58921 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:58.577717   58921 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:58.592354   58921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
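The failed sysctl above is expected on a fresh guest: /proc/sys/net/bridge/bridge-nf-call-iptables only appears once the br_netfilter module is loaded, so the runner falls back to modprobe and then turns on IPv4 forwarding. A compact sketch of that fallback (it needs root and is only an illustration):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        const knob = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(knob); err != nil {
            // The sysctl is absent until br_netfilter is loaded.
            if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
                panic(string(out))
            }
        }
        // Enable IPv4 forwarding, as the log does with "echo 1 > /proc/sys/net/ipv4/ip_forward".
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            panic(err)
        }
    }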
	I0722 11:51:58.602448   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:58.729652   58921 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:58.881699   58921 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:58.881761   58921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:58.887049   58921 start.go:563] Will wait 60s for crictl version
	I0722 11:51:58.887099   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:58.890867   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:58.933081   58921 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:58.933171   58921 ssh_runner.go:195] Run: crio --version
	I0722 11:51:58.960418   58921 ssh_runner.go:195] Run: crio --version
	I0722 11:51:58.992787   58921 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0722 11:51:54.847605   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:57.346927   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:55.716788   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:56.216920   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:56.716328   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:57.217140   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:57.717149   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.217011   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.716511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:59.216969   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:59.717145   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:00.216454   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
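The repeated pgrep lines from process 59674 are a wait loop: a profile being restarted in parallel polls roughly every 500ms for a kube-apiserver process whose command line mentions minikube, until one appears or the wait gives up. A generic sketch of such a poll; the 4-minute timeout is an assumed value for illustration:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until a matching process exists or the timeout expires.
    func waitForProcess(pattern string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when at least one process matches the full command line.
            if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("no process matching %q after %v", pattern, timeout)
    }

    func main() {
        if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }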
	I0722 11:51:58.994009   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:58.996823   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:58.997258   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:58.997279   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:58.997465   58921 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:59.001724   58921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:59.014700   58921 kubeadm.go:883] updating cluster {Name:no-preload-339929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:59.014819   58921 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 11:51:59.014847   58921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:59.049135   58921 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0722 11:51:59.049167   58921 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 11:51:59.049252   58921 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.049268   58921 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.049310   58921 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.049314   58921 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.049335   58921 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.049249   58921 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.049445   58921 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.049480   58921 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0722 11:51:59.050964   58921 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.050974   58921 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.050994   58921 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.051032   58921 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0722 11:51:59.051056   58921 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.051075   58921 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.051098   58921 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.051039   58921 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.220737   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.233831   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.239620   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.240125   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.240548   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.269898   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0722 11:51:59.293368   58921 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0722 11:51:59.293420   58921 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.293468   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.309956   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.336323   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.359236   58921 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0722 11:51:59.359284   58921 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.359336   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.359236   58921 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0722 11:51:59.359371   58921 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.359400   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.371412   58921 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0722 11:51:59.371449   58921 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.371485   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.404322   58921 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0722 11:51:59.404364   58921 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.404427   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.542134   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.542279   58921 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0722 11:51:59.542331   58921 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.542347   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.542360   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.542383   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.542439   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.542444   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.542691   58921 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0722 11:51:59.542725   58921 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.542757   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.653771   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.653819   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.653859   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0722 11:51:59.653877   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.653935   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0722 11:51:59.653945   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:51:59.653994   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:51:59.654000   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0722 11:51:59.654034   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0722 11:51:59.654078   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:51:59.654091   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:51:59.654101   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.706185   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706207   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.706218   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 11:51:59.706250   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.706256   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706292   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:51:59.706298   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0722 11:51:59.706369   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706464   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0722 11:51:59.706509   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0722 11:51:59.706554   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:51:57.342604   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:59.839045   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:59.846551   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:02.346391   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.347558   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:00.717154   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:01.216534   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:01.716349   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.217140   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.716458   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:03.216539   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:03.717179   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:04.216994   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:04.716264   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:05.216962   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.170882   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.464606279s)
	I0722 11:52:02.170914   58921 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.464582845s)
	I0722 11:52:02.170942   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0722 11:52:02.170923   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0722 11:52:02.170949   58921 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.464369058s)
	I0722 11:52:02.170970   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:52:02.170972   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0722 11:52:02.171024   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:52:04.139100   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.9680515s)
	I0722 11:52:04.139132   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0722 11:52:04.139166   58921 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:52:04.139250   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:52:01.840270   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.339017   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.840071   60225 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.840097   60225 pod_ready.go:81] duration metric: took 12.007790604s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.840110   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ssttk" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.845312   60225 pod_ready.go:92] pod "kube-proxy-ssttk" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.845336   60225 pod_ready.go:81] duration metric: took 5.218113ms for pod "kube-proxy-ssttk" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.845348   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.850239   60225 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.850264   60225 pod_ready.go:81] duration metric: took 4.905551ms for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.850273   60225 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:06.849408   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:09.347362   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:05.716753   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:06.216886   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:06.717064   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.217069   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.716953   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:08.216521   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:08.716334   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:09.216504   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:09.716904   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:10.216483   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.435274   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.29599961s)
	I0722 11:52:07.435305   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0722 11:52:07.435331   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:52:07.435368   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:52:08.882569   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.447179999s)
	I0722 11:52:08.882593   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0722 11:52:08.882621   58921 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:52:08.882670   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:52:06.857393   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:09.357742   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:11.845980   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:13.846559   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:10.717066   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:11.216328   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:11.717249   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:12.216579   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:12.716697   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:13.217042   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:13.717186   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:14.216301   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:14.716510   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:15.216925   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:10.861616   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.978918937s)
	I0722 11:52:10.861646   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0722 11:52:10.861670   58921 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:52:10.861717   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:52:11.517096   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 11:52:11.517126   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:52:11.517179   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:52:13.588498   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.071290819s)
	I0722 11:52:13.588531   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0722 11:52:13.588567   58921 cache_images.go:123] Successfully loaded all cached images
	I0722 11:52:13.588580   58921 cache_images.go:92] duration metric: took 14.539397599s to LoadCachedImages
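Because no preload tarball exists for v1.31.0-beta.0, the runner falls back to loading each cached image individually: it stats the tarball already present under /var/lib/minikube/images, skips the scp when the copy exists, and streams the archive into CRI-O's storage with "podman load -i", after removing mismatched tags with crictl. A simplified local sketch of the per-image decision; paths and names are illustrative only:

    package main

    import (
        "fmt"
        "io"
        "os"
        "os/exec"
        "path/filepath"
    )

    // copyFile is a stand-in for the scp step in the log.
    func copyFile(src, dst string) error {
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    // loadCachedImage places an image tarball on the target (if missing) and podman-loads it.
    // The real flow stats, copies and loads on the guest over SSH.
    func loadCachedImage(cacheTar, destDir string) error {
        dest := filepath.Join(destDir, filepath.Base(cacheTar))
        if _, err := os.Stat(dest); err != nil {
            if err := copyFile(cacheTar, dest); err != nil {
                return err
            }
        }
        out, err := exec.Command("sudo", "podman", "load", "-i", dest).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v\n%s", dest, err, out)
        }
        return nil
    }

    func main() {
        if err := loadCachedImage("kube-apiserver_v1.31.0-beta.0", "/var/lib/minikube/images"); err != nil {
            fmt.Println(err)
        }
    }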
	I0722 11:52:13.588591   58921 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.31.0-beta.0 crio true true} ...
	I0722 11:52:13.588728   58921 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-339929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:52:13.588806   58921 ssh_runner.go:195] Run: crio config
	I0722 11:52:13.641949   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:52:13.641969   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:52:13.641978   58921 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:52:13.641997   58921 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-339929 NodeName:no-preload-339929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:52:13.642187   58921 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-339929"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:52:13.642258   58921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 11:52:13.653174   58921 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:52:13.653244   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:52:13.662655   58921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0722 11:52:13.678906   58921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 11:52:13.699269   58921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
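The files written above are the kubelet systemd drop-in plus the kubeadm.yaml shown earlier, which bundles three documents: an InitConfiguration/ClusterConfiguration pair pinning the API server to 192.168.61.112:8443 behind the stable control-plane.minikube.internal endpoint, a KubeletConfiguration matched to CRI-O's cgroupfs driver with disk-pressure eviction disabled, and a KubeProxyConfiguration whose conntrack timeouts are zeroed so kubeadm does not touch net.netfilter sysctls inside the VM. A small sketch of templating the node-specific fields; the template text is a trimmed illustration, not the full config above:

    package main

    import (
        "os"
        "text/template"
    )

    type nodeCfg struct {
        Name, IP, K8sVersion, PodCIDR string
    }

    // A trimmed stand-in for the kubeadm config shown in the log.
    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: control-plane.minikube.internal:8443
    kubernetesVersion: {{.K8sVersion}}
    apiServer:
      certSANs: ["127.0.0.1", "localhost", "{{.IP}}"]
    networking:
      podSubnet: "{{.PodCIDR}}"
      serviceSubnet: 10.96.0.0/12
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
        cfg := nodeCfg{Name: "no-preload-339929", IP: "192.168.61.112",
            K8sVersion: "v1.31.0-beta.0", PodCIDR: "10.244.0.0/16"}
        if err := t.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }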
	I0722 11:52:13.718873   58921 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I0722 11:52:13.722962   58921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:52:13.736241   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:52:13.858093   58921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:52:13.875377   58921 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929 for IP: 192.168.61.112
	I0722 11:52:13.875402   58921 certs.go:194] generating shared ca certs ...
	I0722 11:52:13.875421   58921 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:52:13.875588   58921 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:52:13.875664   58921 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:52:13.875677   58921 certs.go:256] generating profile certs ...
	I0722 11:52:13.875785   58921 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.key
	I0722 11:52:13.875857   58921 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.key.26403d20
	I0722 11:52:13.875895   58921 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.key
	I0722 11:52:13.875998   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:52:13.876025   58921 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:52:13.876036   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:52:13.876057   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:52:13.876079   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:52:13.876100   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:52:13.876139   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:52:13.876804   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:52:13.923607   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:52:13.952785   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:52:13.983113   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:52:14.012712   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 11:52:14.047958   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:52:14.077411   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:52:14.100978   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:52:14.123416   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:52:14.145662   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:52:14.169188   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:52:14.194650   58921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:52:14.212538   58921 ssh_runner.go:195] Run: openssl version
	I0722 11:52:14.218725   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:52:14.231079   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.235652   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.235695   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.241643   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:52:14.252681   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:52:14.263166   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.267588   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.267629   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.273182   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:52:14.284087   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:52:14.294571   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.298824   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.298870   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.304464   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
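The three test/ln pairs above install each uploaded CA into the node's trust store: openssl x509 -hash -noout yields the certificate's subject hash, and /etc/ssl/certs/<hash>.0 is symlinked to the certificate so TLS libraries can find it. A minimal Go sketch of that step follows; it is a standalone, assumed helper (not minikube's certs.go), using one of the paths visible in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA mirrors the logged sequence: ask openssl for the subject hash,
// then link /etc/ssl/certs/<hash>.0 at the certificate (like `ln -fs`).
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ignore error; we only need the link replaced
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}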
	I0722 11:52:14.315110   58921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:52:14.319444   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:52:14.325221   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:52:14.330923   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:52:14.336509   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:52:14.342749   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:52:14.348854   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
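The six openssl runs above verify that none of the restored control-plane certificates expire within 86400 seconds (24 hours). Below is a minimal Go sketch of the equivalent check using crypto/x509 instead of shelling out to openssl; the path in main is illustrative only.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the PEM certificate at path expires
// within the given window, mirroring `openssl x509 -checkend <seconds>`.
func certExpiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical relative path; the real files live under /var/lib/minikube/certs.
	expiring, err := certExpiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}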
	I0722 11:52:14.355682   58921 kubeadm.go:392] StartCluster: {Name:no-preload-339929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:52:14.355818   58921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:52:14.355867   58921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:52:14.395279   58921 cri.go:89] found id: ""
	I0722 11:52:14.395351   58921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:52:14.406738   58921 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:52:14.406755   58921 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:52:14.406793   58921 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:52:14.417161   58921 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:52:14.418468   58921 kubeconfig.go:125] found "no-preload-339929" server: "https://192.168.61.112:8443"
	I0722 11:52:14.420764   58921 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:52:14.430722   58921 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.112
	I0722 11:52:14.430749   58921 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:52:14.430760   58921 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:52:14.430809   58921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:52:14.472164   58921 cri.go:89] found id: ""
	I0722 11:52:14.472228   58921 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:52:14.489758   58921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:52:14.499830   58921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:52:14.499878   58921 kubeadm.go:157] found existing configuration files:
	
	I0722 11:52:14.499932   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:52:14.508977   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:52:14.509024   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:52:14.518199   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:52:14.527136   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:52:14.527182   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:52:14.536182   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:52:14.545425   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:52:14.545482   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:52:14.554843   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:52:14.563681   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:52:14.563722   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
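Each grep/rm pair above keeps a kubeconfig-style file under /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8443; otherwise the file is removed so kubeadm can regenerate it. A rough Go sketch of that cleanup rule follows (an assumed helper, not minikube's kubeadm.go).

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfig removes path unless it already references wantServer,
// matching the grep-then-rm pattern in the log above.
func cleanStaleConfig(path, wantServer string) error {
	data, err := os.ReadFile(path)
	if err != nil || !strings.Contains(string(data), wantServer) {
		if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
			return rmErr
		}
	}
	return nil
}

func main() {
	server := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanStaleConfig(f, server); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}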
	I0722 11:52:14.572855   58921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:52:14.582257   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:14.691452   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.383530   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:11.857298   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:14.357114   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:16.347252   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:18.846603   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:15.716962   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.216373   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.716871   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:17.217108   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:17.716670   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:18.216503   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:18.717214   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:19.216481   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:19.716922   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:20.216618   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:15.600861   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.661719   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
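The kubeadm commands above re-run individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml rather than doing a full kubeadm init. The sketch below drives the same sequence from Go via os/exec; the binary and config paths are the ones visible in the log, the helper itself is an assumption (the real code also sets PATH through sudo env).

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{kubeadm}, append(p, "--config", cfg)...)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s\n", p, err, out)
			return
		}
	}
	fmt.Println("control-plane phases re-run")
}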
	I0722 11:52:15.756150   58921 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:52:15.756243   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.256571   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.756636   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.788487   58921 api_server.go:72] duration metric: took 1.032338614s to wait for apiserver process to appear ...
	I0722 11:52:16.788511   58921 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:52:16.788538   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:16.789057   58921 api_server.go:269] stopped: https://192.168.61.112:8443/healthz: Get "https://192.168.61.112:8443/healthz": dial tcp 192.168.61.112:8443: connect: connection refused
	I0722 11:52:17.289531   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.643492   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:52:19.643522   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:52:19.643538   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.712047   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:52:19.712087   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:52:19.789319   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.903924   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:19.903964   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:20.289484   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:20.294499   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:20.294532   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:16.357488   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:18.857066   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:20.789245   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:20.795813   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:20.795846   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:21.289564   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:21.294121   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 200:
	ok
	I0722 11:52:21.300616   58921 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 11:52:21.300644   58921 api_server.go:131] duration metric: took 4.512126962s to wait for apiserver health ...
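The healthz polling above tolerates 403 responses (anonymous requests rejected while the rbac/bootstrap-roles post-start hook is still failing) and 500 responses (other post-start hooks not yet finished) until the endpoint finally returns 200 ok. A simplified Go sketch of such a poll loop follows; it skips TLS verification purely for illustration, whereas the real client authenticates with the cluster's certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
// Any non-200 status (403, 500) or connection error counts as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify is for this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.112:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}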
	I0722 11:52:21.300652   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:52:21.300661   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:52:21.302460   58921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:52:21.347296   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:23.848716   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:20.717047   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.216924   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.716824   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:22.216907   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:22.716538   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:23.216351   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:23.716755   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:24.216816   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:24.717065   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:25.216949   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.303690   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:52:21.315042   58921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
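The 496-byte file written to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration selected for the kvm2 driver with the crio runtime. The Go snippet below emits an illustrative bridge + portmap conflist: the field names follow the upstream CNI plugin documentation, but the concrete values, and the exact content minikube writes, are assumptions.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative conflist only; not a reproduction of minikube's file.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out)) // the log shows this kind of payload scp'd to /etc/cni/net.d/1-k8s.conflist
}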
	I0722 11:52:21.336417   58921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:52:21.347183   58921 system_pods.go:59] 8 kube-system pods found
	I0722 11:52:21.347225   58921 system_pods.go:61] "coredns-5cfdc65f69-v5qdv" [2321209d-652c-45c1-8d0a-b4ad58f60a25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:52:21.347238   58921 system_pods.go:61] "etcd-no-preload-339929" [9dbeed49-0d34-4643-8a7c-28b9b8b60b00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:52:21.347248   58921 system_pods.go:61] "kube-apiserver-no-preload-339929" [f9675e86-589e-4c6c-b4b5-627e2192b028] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:52:21.347259   58921 system_pods.go:61] "kube-controller-manager-no-preload-339929" [5033e74b-5a1c-4044-aadf-67d5e44b17c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:52:21.347265   58921 system_pods.go:61] "kube-proxy-78tx8" [13f226f0-8837-44d2-aa74-a7db43c73651] Running
	I0722 11:52:21.347276   58921 system_pods.go:61] "kube-scheduler-no-preload-339929" [bf82937c-c95c-4961-afca-60dfe128b6bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:52:21.347288   58921 system_pods.go:61] "metrics-server-78fcd8795b-2lbrr" [1eab4084-3ddf-44f3-9761-130a6f137ea6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:52:21.347294   58921 system_pods.go:61] "storage-provisioner" [66323714-b119-4680-91a3-2e2142e523b4] Running
	I0722 11:52:21.347308   58921 system_pods.go:74] duration metric: took 10.869226ms to wait for pod list to return data ...
	I0722 11:52:21.347316   58921 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:52:21.351215   58921 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:52:21.351242   58921 node_conditions.go:123] node cpu capacity is 2
	I0722 11:52:21.351254   58921 node_conditions.go:105] duration metric: took 3.932625ms to run NodePressure ...
	I0722 11:52:21.351273   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:21.620524   58921 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:52:21.625517   58921 kubeadm.go:739] kubelet initialised
	I0722 11:52:21.625540   58921 kubeadm.go:740] duration metric: took 4.987123ms waiting for restarted kubelet to initialise ...
	I0722 11:52:21.625550   58921 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:52:21.630823   58921 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:23.639602   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.140079   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:25.140103   58921 pod_ready.go:81] duration metric: took 3.509258556s for pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:25.140112   58921 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:20.860912   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:23.356763   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.357406   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:26.345970   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:28.347288   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.716863   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:26.217017   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:26.217108   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:26.259154   59674 cri.go:89] found id: ""
	I0722 11:52:26.259183   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.259193   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:26.259201   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:26.259260   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:26.292777   59674 cri.go:89] found id: ""
	I0722 11:52:26.292801   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.292807   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:26.292813   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:26.292858   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:26.327874   59674 cri.go:89] found id: ""
	I0722 11:52:26.327899   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.327907   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:26.327913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:26.327960   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:26.372370   59674 cri.go:89] found id: ""
	I0722 11:52:26.372405   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.372415   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:26.372421   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:26.372468   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:26.406270   59674 cri.go:89] found id: ""
	I0722 11:52:26.406294   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.406301   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:26.406306   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:26.406355   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:26.441204   59674 cri.go:89] found id: ""
	I0722 11:52:26.441230   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.441237   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:26.441242   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:26.441302   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:26.476132   59674 cri.go:89] found id: ""
	I0722 11:52:26.476162   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.476174   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:26.476180   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:26.476236   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:26.509534   59674 cri.go:89] found id: ""
	I0722 11:52:26.509565   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.509576   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:26.509588   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:26.509601   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:26.564002   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:26.564030   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:26.578619   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:26.578650   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:26.706713   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:26.706738   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:26.706752   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:26.772168   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:26.772201   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
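Every "found id: """ line above comes from listing containers through crictl with a name filter; the consistently empty result is what pushes this run into gathering kubelet, dmesg and CRI-O logs instead. A small Go sketch of that listing step follows (exec-based, not minikube's cri.go).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers runs crictl the same way the cri.go lines above do and
// returns the matching container IDs; an empty slice corresponds to the
// `found id: ""` entries in this log.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}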
	I0722 11:52:29.313944   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:29.328002   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:29.328076   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:29.367128   59674 cri.go:89] found id: ""
	I0722 11:52:29.367157   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.367166   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:29.367173   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:29.367244   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:29.401552   59674 cri.go:89] found id: ""
	I0722 11:52:29.401581   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.401592   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:29.401599   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:29.401677   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:29.433892   59674 cri.go:89] found id: ""
	I0722 11:52:29.433919   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.433931   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:29.433943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:29.433993   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:29.469619   59674 cri.go:89] found id: ""
	I0722 11:52:29.469649   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.469660   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:29.469667   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:29.469726   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:29.504771   59674 cri.go:89] found id: ""
	I0722 11:52:29.504795   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.504805   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:29.504811   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:29.504871   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:29.538861   59674 cri.go:89] found id: ""
	I0722 11:52:29.538890   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.538900   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:29.538912   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:29.538975   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:29.593633   59674 cri.go:89] found id: ""
	I0722 11:52:29.593669   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.593680   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:29.593688   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:29.593747   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:29.638605   59674 cri.go:89] found id: ""
	I0722 11:52:29.638636   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.638645   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:29.638653   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:29.638664   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:29.691633   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:29.691662   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:29.707277   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:29.707305   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:29.785616   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:29.785638   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:29.785669   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:29.857487   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:29.857517   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:27.146649   58921 pod_ready.go:102] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:28.646058   58921 pod_ready.go:92] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:28.646083   58921 pod_ready.go:81] duration metric: took 3.505964852s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:28.646092   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:27.855581   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:29.856605   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:30.847291   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.847946   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.398141   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:32.411380   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:32.411453   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:32.445857   59674 cri.go:89] found id: ""
	I0722 11:52:32.445882   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.445889   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:32.445895   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:32.445946   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:32.478146   59674 cri.go:89] found id: ""
	I0722 11:52:32.478180   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.478190   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:32.478197   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:32.478268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:32.511110   59674 cri.go:89] found id: ""
	I0722 11:52:32.511138   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.511147   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:32.511161   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:32.511216   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:32.545388   59674 cri.go:89] found id: ""
	I0722 11:52:32.545415   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.545425   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:32.545432   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:32.545489   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:32.579097   59674 cri.go:89] found id: ""
	I0722 11:52:32.579125   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.579135   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:32.579141   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:32.579205   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:32.615302   59674 cri.go:89] found id: ""
	I0722 11:52:32.615333   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.615343   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:32.615350   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:32.615407   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:32.654527   59674 cri.go:89] found id: ""
	I0722 11:52:32.654552   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.654562   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:32.654568   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:32.654625   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:32.689409   59674 cri.go:89] found id: ""
	I0722 11:52:32.689437   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.689445   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:32.689454   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:32.689470   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:32.740478   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:32.740511   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:32.754266   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:32.754299   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:32.824441   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:32.824461   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:32.824475   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:32.896752   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:32.896781   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:30.652706   58921 pod_ready.go:102] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.653310   58921 pod_ready.go:102] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.154169   58921 pod_ready.go:92] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.154195   58921 pod_ready.go:81] duration metric: took 6.508095973s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.154207   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.160406   58921 pod_ready.go:92] pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.160429   58921 pod_ready.go:81] duration metric: took 6.213375ms for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.160440   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-78tx8" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.166358   58921 pod_ready.go:92] pod "kube-proxy-78tx8" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.166377   58921 pod_ready.go:81] duration metric: took 5.930051ms for pod "kube-proxy-78tx8" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.166387   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.170508   58921 pod_ready.go:92] pod "kube-scheduler-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.170528   58921 pod_ready.go:81] duration metric: took 4.133521ms for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.170538   58921 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" ...
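The pod_ready lines above poll each system-critical pod until its Ready condition reports True, with a 4m0s budget per pod. A compact client-go sketch of the same wait follows; the kubeconfig path and pod name are taken from the log, while the helper itself is an assumption rather than minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls until the named pod has condition Ready=True,
// mirroring the pod_ready waits logged above.
func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForPodReady(cs, "kube-system", "etcd-no-preload-339929", 4*time.Minute))
}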
	I0722 11:52:32.355967   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:34.358106   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.346579   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:37.346671   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:39.346974   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.438478   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:35.454105   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:35.454175   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:35.493287   59674 cri.go:89] found id: ""
	I0722 11:52:35.493319   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.493330   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:35.493337   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:35.493396   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:35.528035   59674 cri.go:89] found id: ""
	I0722 11:52:35.528060   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.528066   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:35.528072   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:35.528126   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:35.586153   59674 cri.go:89] found id: ""
	I0722 11:52:35.586199   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.586213   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:35.586220   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:35.586283   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:35.630371   59674 cri.go:89] found id: ""
	I0722 11:52:35.630405   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.630416   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:35.630425   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:35.630499   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:35.667593   59674 cri.go:89] found id: ""
	I0722 11:52:35.667621   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.667629   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:35.667635   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:35.667682   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:35.706933   59674 cri.go:89] found id: ""
	I0722 11:52:35.706964   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.706973   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:35.706981   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:35.707040   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:35.743174   59674 cri.go:89] found id: ""
	I0722 11:52:35.743205   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.743215   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:35.743223   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:35.743289   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:35.784450   59674 cri.go:89] found id: ""
	I0722 11:52:35.784478   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.784487   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:35.784497   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:35.784508   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:35.840326   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:35.840357   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:35.856432   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:35.856471   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:35.932273   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:35.932298   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:35.932313   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:36.010376   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:36.010420   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:38.552982   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:38.566817   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:38.566895   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:38.601313   59674 cri.go:89] found id: ""
	I0722 11:52:38.601356   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.601371   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:38.601381   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:38.601459   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:38.637303   59674 cri.go:89] found id: ""
	I0722 11:52:38.637331   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.637341   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:38.637352   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:38.637413   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:38.672840   59674 cri.go:89] found id: ""
	I0722 11:52:38.672871   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.672883   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:38.672894   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:38.672986   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:38.709375   59674 cri.go:89] found id: ""
	I0722 11:52:38.709402   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.709413   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:38.709420   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:38.709473   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:38.744060   59674 cri.go:89] found id: ""
	I0722 11:52:38.744084   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.744094   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:38.744100   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:38.744161   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:38.778322   59674 cri.go:89] found id: ""
	I0722 11:52:38.778350   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.778361   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:38.778368   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:38.778427   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:38.811803   59674 cri.go:89] found id: ""
	I0722 11:52:38.811830   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.811840   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:38.811847   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:38.811902   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:38.843935   59674 cri.go:89] found id: ""
	I0722 11:52:38.843959   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.843975   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:38.843985   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:38.843999   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:38.912613   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:38.912639   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:38.912654   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:39.001924   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:39.001964   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:39.041645   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:39.041684   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:39.093322   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:39.093354   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:37.177516   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:39.675985   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:36.856164   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:38.858983   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.847112   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:44.346271   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.606698   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:41.619758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:41.619815   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:41.657432   59674 cri.go:89] found id: ""
	I0722 11:52:41.657458   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.657469   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:41.657476   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:41.657536   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:41.695136   59674 cri.go:89] found id: ""
	I0722 11:52:41.695169   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.695177   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:41.695183   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:41.695243   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:41.735595   59674 cri.go:89] found id: ""
	I0722 11:52:41.735621   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.735641   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:41.735648   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:41.735710   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:41.770398   59674 cri.go:89] found id: ""
	I0722 11:52:41.770428   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.770438   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:41.770445   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:41.770554   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:41.808250   59674 cri.go:89] found id: ""
	I0722 11:52:41.808277   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.808285   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:41.808290   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:41.808349   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:41.843494   59674 cri.go:89] found id: ""
	I0722 11:52:41.843524   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.843536   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:41.843543   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:41.843611   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:41.882916   59674 cri.go:89] found id: ""
	I0722 11:52:41.882941   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.882949   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:41.882954   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:41.883011   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:41.916503   59674 cri.go:89] found id: ""
	I0722 11:52:41.916527   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.916538   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:41.916549   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:41.916564   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:41.966989   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:41.967023   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:42.021676   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:42.021716   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:42.054625   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:42.054655   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:42.122425   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:42.122449   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:42.122463   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:44.699097   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:44.713759   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:44.713815   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:44.752668   59674 cri.go:89] found id: ""
	I0722 11:52:44.752698   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.752709   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:44.752716   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:44.752778   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:44.793550   59674 cri.go:89] found id: ""
	I0722 11:52:44.793575   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.793587   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:44.793594   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:44.793665   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:44.833860   59674 cri.go:89] found id: ""
	I0722 11:52:44.833882   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.833890   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:44.833903   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:44.833952   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:44.873847   59674 cri.go:89] found id: ""
	I0722 11:52:44.873880   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.873898   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:44.873910   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:44.873957   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:44.907843   59674 cri.go:89] found id: ""
	I0722 11:52:44.907867   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.907877   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:44.907884   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:44.907937   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:44.942998   59674 cri.go:89] found id: ""
	I0722 11:52:44.943026   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.943034   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:44.943040   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:44.943093   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:44.981145   59674 cri.go:89] found id: ""
	I0722 11:52:44.981173   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.981183   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:44.981190   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:44.981252   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:45.018542   59674 cri.go:89] found id: ""
	I0722 11:52:45.018568   59674 logs.go:276] 0 containers: []
	W0722 11:52:45.018576   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:45.018585   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:45.018599   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:45.069480   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:45.069510   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:45.083323   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:45.083347   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:45.149976   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:45.149996   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:45.150008   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:45.230617   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:45.230649   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:41.677474   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:43.678565   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.357194   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:43.856753   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:46.346339   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.846643   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:47.770384   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:47.793582   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:47.793654   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:47.837187   59674 cri.go:89] found id: ""
	I0722 11:52:47.837215   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.837224   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:47.837232   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:47.837290   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:47.874295   59674 cri.go:89] found id: ""
	I0722 11:52:47.874325   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.874336   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:47.874345   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:47.874414   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:47.915782   59674 cri.go:89] found id: ""
	I0722 11:52:47.915812   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.915823   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:47.915830   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:47.915886   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:47.956624   59674 cri.go:89] found id: ""
	I0722 11:52:47.956653   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.956663   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:47.956670   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:47.956731   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:47.996237   59674 cri.go:89] found id: ""
	I0722 11:52:47.996264   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.996272   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:47.996277   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:47.996335   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:48.032022   59674 cri.go:89] found id: ""
	I0722 11:52:48.032046   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.032058   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:48.032066   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:48.032117   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:48.066218   59674 cri.go:89] found id: ""
	I0722 11:52:48.066248   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.066259   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:48.066265   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:48.066316   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:48.099781   59674 cri.go:89] found id: ""
	I0722 11:52:48.099803   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.099810   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:48.099818   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:48.099827   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:48.174488   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:48.174528   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:48.215029   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:48.215068   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:48.268819   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:48.268850   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:48.283307   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:48.283335   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:48.356491   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:45.678697   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.179684   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:45.857970   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.357330   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.357469   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.846976   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.847954   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.857172   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:50.871178   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:50.871244   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:50.907166   59674 cri.go:89] found id: ""
	I0722 11:52:50.907190   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.907197   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:50.907203   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:50.907256   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:50.942929   59674 cri.go:89] found id: ""
	I0722 11:52:50.942958   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.942969   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:50.942976   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:50.943041   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:50.982323   59674 cri.go:89] found id: ""
	I0722 11:52:50.982355   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.982367   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:50.982373   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:50.982436   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:51.016557   59674 cri.go:89] found id: ""
	I0722 11:52:51.016586   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.016597   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:51.016604   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:51.016662   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:51.051811   59674 cri.go:89] found id: ""
	I0722 11:52:51.051844   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.051855   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:51.051863   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:51.051923   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:51.088147   59674 cri.go:89] found id: ""
	I0722 11:52:51.088177   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.088189   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:51.088197   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:51.088257   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:51.126795   59674 cri.go:89] found id: ""
	I0722 11:52:51.126827   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.126838   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:51.126845   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:51.126909   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:51.165508   59674 cri.go:89] found id: ""
	I0722 11:52:51.165539   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.165550   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:51.165562   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:51.165575   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:51.245014   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:51.245040   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:51.245055   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:51.335845   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:51.335893   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:51.375806   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:51.375837   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:51.430241   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:51.430270   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:53.944572   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:53.957805   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:53.957899   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:53.997116   59674 cri.go:89] found id: ""
	I0722 11:52:53.997144   59674 logs.go:276] 0 containers: []
	W0722 11:52:53.997154   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:53.997161   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:53.997222   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:54.033518   59674 cri.go:89] found id: ""
	I0722 11:52:54.033544   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.033553   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:54.033560   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:54.033626   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:54.071083   59674 cri.go:89] found id: ""
	I0722 11:52:54.071108   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.071119   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:54.071127   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:54.071194   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:54.107834   59674 cri.go:89] found id: ""
	I0722 11:52:54.107860   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.107868   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:54.107873   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:54.107929   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:54.141825   59674 cri.go:89] found id: ""
	I0722 11:52:54.141850   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.141858   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:54.141865   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:54.141925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:54.174297   59674 cri.go:89] found id: ""
	I0722 11:52:54.174323   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.174333   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:54.174341   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:54.174403   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:54.206781   59674 cri.go:89] found id: ""
	I0722 11:52:54.206803   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.206811   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:54.206816   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:54.206861   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:54.239180   59674 cri.go:89] found id: ""
	I0722 11:52:54.239204   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.239212   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:54.239223   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:54.239237   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:54.307317   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:54.307345   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:54.307360   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:54.392334   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:54.392368   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:54.435129   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:54.435168   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:54.495428   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:54.495456   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:50.676790   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.678046   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:55.177430   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.357839   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:54.856859   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:55.346866   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.845527   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.009559   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:57.024145   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:57.024215   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:57.063027   59674 cri.go:89] found id: ""
	I0722 11:52:57.063053   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.063060   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:57.063066   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:57.063133   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:57.095940   59674 cri.go:89] found id: ""
	I0722 11:52:57.095961   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.095968   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:57.095973   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:57.096018   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:57.129931   59674 cri.go:89] found id: ""
	I0722 11:52:57.129952   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.129960   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:57.129965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:57.130009   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:57.164643   59674 cri.go:89] found id: ""
	I0722 11:52:57.164672   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.164683   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:57.164691   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:57.164744   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:57.201411   59674 cri.go:89] found id: ""
	I0722 11:52:57.201440   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.201451   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:57.201458   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:57.201523   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:57.235816   59674 cri.go:89] found id: ""
	I0722 11:52:57.235838   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.235848   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:57.235854   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:57.235913   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:57.273896   59674 cri.go:89] found id: ""
	I0722 11:52:57.273925   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.273936   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:57.273943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:57.273997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:57.312577   59674 cri.go:89] found id: ""
	I0722 11:52:57.312602   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.312610   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:57.312618   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:57.312636   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:57.366529   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:57.366558   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:57.380829   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:57.380854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:57.450855   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:57.450875   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:57.450889   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:57.531450   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:57.531480   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:00.071642   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:00.085199   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:00.085264   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:00.123418   59674 cri.go:89] found id: ""
	I0722 11:53:00.123439   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.123446   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:00.123451   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:00.123510   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:00.157005   59674 cri.go:89] found id: ""
	I0722 11:53:00.157032   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.157042   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:00.157049   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:00.157108   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:00.196244   59674 cri.go:89] found id: ""
	I0722 11:53:00.196272   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.196281   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:00.196286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:00.196335   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:00.233010   59674 cri.go:89] found id: ""
	I0722 11:53:00.233039   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.233049   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:00.233056   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:00.233112   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:00.268154   59674 cri.go:89] found id: ""
	I0722 11:53:00.268179   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.268187   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:00.268192   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:00.268250   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:00.304159   59674 cri.go:89] found id: ""
	I0722 11:53:00.304184   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.304194   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:00.304201   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:00.304268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:00.336853   59674 cri.go:89] found id: ""
	I0722 11:53:00.336883   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.336893   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:00.336899   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:00.336960   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:00.370921   59674 cri.go:89] found id: ""
	I0722 11:53:00.370943   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.370953   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:00.370963   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:00.370979   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:57.177913   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:59.677194   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.356163   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:59.357042   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:00.347125   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:02.846531   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:00.422367   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:00.422399   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:00.437915   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:00.437947   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:00.512663   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:00.512689   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:00.512700   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:00.595147   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:00.595189   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:03.135150   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:03.148079   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:03.148151   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:03.182278   59674 cri.go:89] found id: ""
	I0722 11:53:03.182308   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.182318   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:03.182327   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:03.182409   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:03.220570   59674 cri.go:89] found id: ""
	I0722 11:53:03.220599   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.220607   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:03.220613   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:03.220671   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:03.255917   59674 cri.go:89] found id: ""
	I0722 11:53:03.255940   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.255950   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:03.255957   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:03.256020   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:03.290857   59674 cri.go:89] found id: ""
	I0722 11:53:03.290885   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.290895   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:03.290902   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:03.290959   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:03.326917   59674 cri.go:89] found id: ""
	I0722 11:53:03.326940   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.326951   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:03.326958   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:03.327016   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:03.363787   59674 cri.go:89] found id: ""
	I0722 11:53:03.363809   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.363818   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:03.363825   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:03.363881   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:03.397453   59674 cri.go:89] found id: ""
	I0722 11:53:03.397479   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.397489   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:03.397496   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:03.397554   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:03.429984   59674 cri.go:89] found id: ""
	I0722 11:53:03.430012   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.430020   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:03.430037   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:03.430054   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:03.509273   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:03.509305   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:03.555522   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:03.555552   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:03.607361   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:03.607389   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:03.622731   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:03.622752   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:03.699844   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:02.176754   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:04.180602   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:01.856868   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:04.356343   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:05.346023   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:07.846190   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:06.200053   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:06.213571   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:06.213628   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:06.249320   59674 cri.go:89] found id: ""
	I0722 11:53:06.249348   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.249359   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:06.249366   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:06.249426   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:06.283378   59674 cri.go:89] found id: ""
	I0722 11:53:06.283405   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.283415   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:06.283422   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:06.283482   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:06.319519   59674 cri.go:89] found id: ""
	I0722 11:53:06.319540   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.319548   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:06.319553   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:06.319606   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:06.352263   59674 cri.go:89] found id: ""
	I0722 11:53:06.352289   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.352298   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:06.352310   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:06.352370   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:06.388262   59674 cri.go:89] found id: ""
	I0722 11:53:06.388285   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.388292   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:06.388297   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:06.388348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:06.427487   59674 cri.go:89] found id: ""
	I0722 11:53:06.427519   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.427529   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:06.427537   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:06.427592   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:06.462567   59674 cri.go:89] found id: ""
	I0722 11:53:06.462597   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.462610   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:06.462618   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:06.462674   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:06.496880   59674 cri.go:89] found id: ""
	I0722 11:53:06.496904   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.496911   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:06.496920   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:06.496929   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:06.549225   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:06.549262   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:06.564780   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:06.564808   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:06.632152   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:06.632177   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:06.632196   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:06.706909   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:06.706948   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:09.246773   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:09.260605   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:09.260673   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:09.294685   59674 cri.go:89] found id: ""
	I0722 11:53:09.294707   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.294718   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:09.294726   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:09.294787   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:09.331109   59674 cri.go:89] found id: ""
	I0722 11:53:09.331140   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.331148   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:09.331153   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:09.331208   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:09.366873   59674 cri.go:89] found id: ""
	I0722 11:53:09.366901   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.366911   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:09.366928   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:09.366980   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:09.399614   59674 cri.go:89] found id: ""
	I0722 11:53:09.399642   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.399649   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:09.399655   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:09.399708   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:09.434326   59674 cri.go:89] found id: ""
	I0722 11:53:09.434359   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.434369   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:09.434375   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:09.434437   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:09.468911   59674 cri.go:89] found id: ""
	I0722 11:53:09.468942   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.468953   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:09.468961   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:09.469021   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:09.510003   59674 cri.go:89] found id: ""
	I0722 11:53:09.510031   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.510042   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:09.510048   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:09.510101   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:09.545074   59674 cri.go:89] found id: ""
	I0722 11:53:09.545103   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.545113   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:09.545123   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:09.545148   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:09.559370   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:09.559399   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:09.632039   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:09.632064   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:09.632083   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:09.711851   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:09.711881   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:09.751872   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:09.751898   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:06.678310   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:09.176261   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:06.358444   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:08.858131   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:09.846552   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:12.347071   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:12.302294   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:12.315638   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:12.315708   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:12.349556   59674 cri.go:89] found id: ""
	I0722 11:53:12.349579   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.349588   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:12.349595   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:12.349651   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:12.387443   59674 cri.go:89] found id: ""
	I0722 11:53:12.387470   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.387483   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:12.387488   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:12.387541   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:12.422676   59674 cri.go:89] found id: ""
	I0722 11:53:12.422704   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.422714   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:12.422720   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:12.422781   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:12.457069   59674 cri.go:89] found id: ""
	I0722 11:53:12.457099   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.457111   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:12.457117   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:12.457175   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:12.492498   59674 cri.go:89] found id: ""
	I0722 11:53:12.492526   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.492536   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:12.492543   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:12.492603   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:12.529015   59674 cri.go:89] found id: ""
	I0722 11:53:12.529046   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.529056   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:12.529063   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:12.529122   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:12.564325   59674 cri.go:89] found id: ""
	I0722 11:53:12.564353   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.564363   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:12.564371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:12.564441   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:12.603232   59674 cri.go:89] found id: ""
	I0722 11:53:12.603257   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.603269   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:12.603278   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:12.603289   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:12.689901   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:12.689933   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:12.729780   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:12.729808   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:12.778899   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:12.778928   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:12.792619   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:12.792649   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:12.860293   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:15.361321   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:15.375062   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:15.375125   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:15.409072   59674 cri.go:89] found id: ""
	I0722 11:53:15.409096   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.409104   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:15.409109   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:15.409163   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:11.176321   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:13.176728   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.176983   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:11.356441   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:13.356690   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:14.846984   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:17.346182   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:19.346559   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.447004   59674 cri.go:89] found id: ""
	I0722 11:53:15.447026   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.447033   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:15.447039   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:15.447096   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:15.480783   59674 cri.go:89] found id: ""
	I0722 11:53:15.480811   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.480822   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:15.480829   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:15.480906   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:15.520672   59674 cri.go:89] found id: ""
	I0722 11:53:15.520701   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.520713   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:15.520721   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:15.520777   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:15.557886   59674 cri.go:89] found id: ""
	I0722 11:53:15.557916   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.557926   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:15.557933   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:15.557994   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:15.593517   59674 cri.go:89] found id: ""
	I0722 11:53:15.593545   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.593555   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:15.593561   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:15.593619   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:15.628205   59674 cri.go:89] found id: ""
	I0722 11:53:15.628235   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.628246   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:15.628253   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:15.628314   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:15.664239   59674 cri.go:89] found id: ""
	I0722 11:53:15.664265   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.664276   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:15.664287   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:15.664300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:15.714246   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:15.714281   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:15.728467   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:15.728490   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:15.813299   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:15.813323   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:15.813339   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:15.899949   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:15.899984   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:18.443394   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:18.457499   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:18.457555   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:18.489712   59674 cri.go:89] found id: ""
	I0722 11:53:18.489735   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.489745   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:18.489752   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:18.489812   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:18.524947   59674 cri.go:89] found id: ""
	I0722 11:53:18.524973   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.524982   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:18.524989   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:18.525045   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:18.560325   59674 cri.go:89] found id: ""
	I0722 11:53:18.560350   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.560361   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:18.560367   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:18.560439   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:18.594221   59674 cri.go:89] found id: ""
	I0722 11:53:18.594247   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.594255   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:18.594265   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:18.594322   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:18.630809   59674 cri.go:89] found id: ""
	I0722 11:53:18.630839   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.630850   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:18.630857   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:18.630917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:18.666051   59674 cri.go:89] found id: ""
	I0722 11:53:18.666078   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.666089   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:18.666100   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:18.666159   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:18.703337   59674 cri.go:89] found id: ""
	I0722 11:53:18.703362   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.703370   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:18.703375   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:18.703435   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:18.738960   59674 cri.go:89] found id: ""
	I0722 11:53:18.738990   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.738999   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:18.739008   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:18.739022   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:18.788130   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:18.788163   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:18.802219   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:18.802249   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:18.869568   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:18.869586   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:18.869597   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:18.947223   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:18.947256   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:17.177247   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:19.677072   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.857320   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:18.356290   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:20.356364   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:21.346698   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:23.846749   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:21.487936   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:21.501337   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:21.501421   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:21.537649   59674 cri.go:89] found id: ""
	I0722 11:53:21.537674   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.537681   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:21.537686   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:21.537746   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:21.583693   59674 cri.go:89] found id: ""
	I0722 11:53:21.583728   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.583738   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:21.583745   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:21.583803   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:21.621690   59674 cri.go:89] found id: ""
	I0722 11:53:21.621714   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.621722   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:21.621728   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:21.621773   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:21.657855   59674 cri.go:89] found id: ""
	I0722 11:53:21.657878   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.657885   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:21.657891   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:21.657953   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:21.695025   59674 cri.go:89] found id: ""
	I0722 11:53:21.695051   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.695059   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:21.695065   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:21.695113   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:21.730108   59674 cri.go:89] found id: ""
	I0722 11:53:21.730138   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.730146   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:21.730151   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:21.730208   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:21.763943   59674 cri.go:89] found id: ""
	I0722 11:53:21.763972   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.763980   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:21.763985   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:21.764030   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:21.801227   59674 cri.go:89] found id: ""
	I0722 11:53:21.801251   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.801259   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:21.801270   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:21.801283   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:21.851428   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:21.851457   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:21.867798   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:21.867827   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:21.945577   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:21.945599   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:21.945612   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:22.028796   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:22.028839   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:24.577167   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:24.589859   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:24.589917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:24.623952   59674 cri.go:89] found id: ""
	I0722 11:53:24.623985   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.623997   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:24.624003   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:24.624065   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:24.658881   59674 cri.go:89] found id: ""
	I0722 11:53:24.658910   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.658919   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:24.658925   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:24.658973   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:24.694551   59674 cri.go:89] found id: ""
	I0722 11:53:24.694574   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.694584   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:24.694590   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:24.694634   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:24.728952   59674 cri.go:89] found id: ""
	I0722 11:53:24.728980   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.728990   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:24.728999   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:24.729061   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:24.764562   59674 cri.go:89] found id: ""
	I0722 11:53:24.764584   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.764592   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:24.764597   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:24.764643   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:24.804184   59674 cri.go:89] found id: ""
	I0722 11:53:24.804209   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.804219   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:24.804226   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:24.804277   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:24.841870   59674 cri.go:89] found id: ""
	I0722 11:53:24.841896   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.841906   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:24.841913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:24.841967   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:24.876174   59674 cri.go:89] found id: ""
	I0722 11:53:24.876201   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.876210   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:24.876220   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:24.876234   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:24.928405   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:24.928434   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:24.942443   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:24.942472   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:25.010281   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:25.010304   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:25.010318   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:25.091493   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:25.091525   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:22.176013   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:24.177414   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:22.356642   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:24.856817   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:26.346061   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:28.346192   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:27.630939   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:27.644250   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:27.644324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:27.686356   59674 cri.go:89] found id: ""
	I0722 11:53:27.686381   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.686391   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:27.686404   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:27.686483   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:27.719105   59674 cri.go:89] found id: ""
	I0722 11:53:27.719133   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.719143   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:27.719149   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:27.719210   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:27.755476   59674 cri.go:89] found id: ""
	I0722 11:53:27.755505   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.755514   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:27.755520   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:27.755570   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:27.789936   59674 cri.go:89] found id: ""
	I0722 11:53:27.789963   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.789971   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:27.789977   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:27.790023   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:27.824246   59674 cri.go:89] found id: ""
	I0722 11:53:27.824273   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.824280   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:27.824286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:27.824332   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:27.860081   59674 cri.go:89] found id: ""
	I0722 11:53:27.860107   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.860114   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:27.860120   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:27.860172   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:27.895705   59674 cri.go:89] found id: ""
	I0722 11:53:27.895732   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.895741   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:27.895748   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:27.895801   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:27.930750   59674 cri.go:89] found id: ""
	I0722 11:53:27.930774   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.930781   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:27.930790   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:27.930802   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:28.025545   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:28.025567   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:28.025578   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:28.111194   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:28.111227   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:28.154270   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:28.154300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:28.205822   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:28.205854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:26.677054   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:29.178063   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:26.856858   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:29.356840   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:30.346338   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:32.346478   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:30.720468   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:30.733753   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:30.733806   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:30.771774   59674 cri.go:89] found id: ""
	I0722 11:53:30.771803   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.771810   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:30.771816   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:30.771876   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:30.810499   59674 cri.go:89] found id: ""
	I0722 11:53:30.810526   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.810537   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:30.810543   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:30.810608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:30.846824   59674 cri.go:89] found id: ""
	I0722 11:53:30.846854   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.846865   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:30.846872   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:30.846929   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:30.882372   59674 cri.go:89] found id: ""
	I0722 11:53:30.882399   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.882408   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:30.882415   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:30.882462   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:30.916152   59674 cri.go:89] found id: ""
	I0722 11:53:30.916186   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.916201   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:30.916209   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:30.916281   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:30.950442   59674 cri.go:89] found id: ""
	I0722 11:53:30.950466   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.950475   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:30.950482   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:30.950537   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:30.988328   59674 cri.go:89] found id: ""
	I0722 11:53:30.988355   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.988367   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:30.988374   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:30.988452   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:31.024500   59674 cri.go:89] found id: ""
	I0722 11:53:31.024531   59674 logs.go:276] 0 containers: []
	W0722 11:53:31.024542   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:31.024552   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:31.024565   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:31.078276   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:31.078306   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:31.093640   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:31.093665   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:31.161107   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:31.161131   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:31.161145   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:31.248520   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:31.248552   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:33.792694   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:33.806731   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:33.806802   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:33.840813   59674 cri.go:89] found id: ""
	I0722 11:53:33.840842   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.840852   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:33.840859   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:33.840930   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:33.878353   59674 cri.go:89] found id: ""
	I0722 11:53:33.878380   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.878388   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:33.878394   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:33.878453   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:33.913894   59674 cri.go:89] found id: ""
	I0722 11:53:33.913927   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.913937   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:33.913944   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:33.914007   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:33.950659   59674 cri.go:89] found id: ""
	I0722 11:53:33.950689   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.950700   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:33.950706   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:33.950762   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:33.987904   59674 cri.go:89] found id: ""
	I0722 11:53:33.987932   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.987940   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:33.987945   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:33.987995   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:34.022877   59674 cri.go:89] found id: ""
	I0722 11:53:34.022900   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.022910   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:34.022918   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:34.022970   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:34.056678   59674 cri.go:89] found id: ""
	I0722 11:53:34.056707   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.056717   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:34.056722   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:34.056769   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:34.089573   59674 cri.go:89] found id: ""
	I0722 11:53:34.089602   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.089610   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:34.089618   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:34.089630   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:34.161023   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:34.161043   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:34.161058   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:34.243215   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:34.243249   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:34.290788   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:34.290812   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:34.339653   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:34.339692   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:31.677233   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:33.678067   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:31.856615   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:33.857665   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:34.846962   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.847525   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:39.347402   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.857217   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:36.871083   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:36.871150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:36.913807   59674 cri.go:89] found id: ""
	I0722 11:53:36.913833   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.913841   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:36.913847   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:36.913923   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:36.953290   59674 cri.go:89] found id: ""
	I0722 11:53:36.953316   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.953327   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:36.953334   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:36.953395   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:36.990900   59674 cri.go:89] found id: ""
	I0722 11:53:36.990930   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.990938   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:36.990943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:36.990997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:37.034346   59674 cri.go:89] found id: ""
	I0722 11:53:37.034371   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.034381   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:37.034387   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:37.034444   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:37.071413   59674 cri.go:89] found id: ""
	I0722 11:53:37.071440   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.071451   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:37.071458   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:37.071509   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:37.107034   59674 cri.go:89] found id: ""
	I0722 11:53:37.107065   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.107076   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:37.107084   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:37.107143   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:37.145505   59674 cri.go:89] found id: ""
	I0722 11:53:37.145528   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.145536   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:37.145545   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:37.145607   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:37.182287   59674 cri.go:89] found id: ""
	I0722 11:53:37.182313   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.182321   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:37.182332   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:37.182343   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:37.195663   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:37.195688   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:37.267451   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:37.267476   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:37.267492   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:37.348532   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:37.348561   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:37.396108   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:37.396134   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:39.946775   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:39.959980   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:39.960039   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:39.994172   59674 cri.go:89] found id: ""
	I0722 11:53:39.994198   59674 logs.go:276] 0 containers: []
	W0722 11:53:39.994208   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:39.994213   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:39.994269   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:40.032782   59674 cri.go:89] found id: ""
	I0722 11:53:40.032813   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.032823   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:40.032830   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:40.032890   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:40.067503   59674 cri.go:89] found id: ""
	I0722 11:53:40.067525   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.067532   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:40.067537   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:40.067593   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:40.102234   59674 cri.go:89] found id: ""
	I0722 11:53:40.102262   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.102273   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:40.102280   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:40.102342   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:40.135152   59674 cri.go:89] found id: ""
	I0722 11:53:40.135180   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.135190   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:40.135197   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:40.135262   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:40.168930   59674 cri.go:89] found id: ""
	I0722 11:53:40.168958   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.168978   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:40.168993   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:40.169056   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:40.209032   59674 cri.go:89] found id: ""
	I0722 11:53:40.209058   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.209065   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:40.209071   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:40.209131   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:40.243952   59674 cri.go:89] found id: ""
	I0722 11:53:40.243976   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.243984   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:40.243993   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:40.244006   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:40.297909   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:40.297944   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:40.313359   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:40.313385   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:40.391089   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:40.391118   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:40.391136   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:36.178616   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:38.677556   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.356964   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:38.857992   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:41.847033   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:44.346087   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:40.469622   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:40.469652   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:43.010264   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:43.023750   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:43.023823   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:43.058899   59674 cri.go:89] found id: ""
	I0722 11:53:43.058922   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.058930   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:43.058937   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:43.058999   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:43.093308   59674 cri.go:89] found id: ""
	I0722 11:53:43.093328   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.093336   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:43.093341   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:43.093385   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:43.126617   59674 cri.go:89] found id: ""
	I0722 11:53:43.126648   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.126671   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:43.126686   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:43.126737   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:43.159455   59674 cri.go:89] found id: ""
	I0722 11:53:43.159482   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.159492   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:43.159500   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:43.159561   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:43.195726   59674 cri.go:89] found id: ""
	I0722 11:53:43.195749   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.195758   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:43.195766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:43.195830   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:43.231996   59674 cri.go:89] found id: ""
	I0722 11:53:43.232025   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.232038   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:43.232046   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:43.232118   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:43.266911   59674 cri.go:89] found id: ""
	I0722 11:53:43.266936   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.266943   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:43.266948   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:43.267005   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:43.303202   59674 cri.go:89] found id: ""
	I0722 11:53:43.303227   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.303236   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:43.303243   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:43.303255   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:43.377328   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:43.377362   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:43.418732   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:43.418759   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:43.471507   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:43.471536   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:43.485141   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:43.485175   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:43.557071   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:41.178042   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:43.178179   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:41.357090   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:43.856788   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:46.346435   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.347938   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:46.057361   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:46.071701   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:46.071784   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:46.107818   59674 cri.go:89] found id: ""
	I0722 11:53:46.107845   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.107853   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:46.107859   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:46.107952   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:46.141871   59674 cri.go:89] found id: ""
	I0722 11:53:46.141898   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.141906   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:46.141911   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:46.141972   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:46.180980   59674 cri.go:89] found id: ""
	I0722 11:53:46.181004   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.181014   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:46.181021   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:46.181083   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:46.219765   59674 cri.go:89] found id: ""
	I0722 11:53:46.219797   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.219806   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:46.219812   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:46.219866   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:46.259517   59674 cri.go:89] found id: ""
	I0722 11:53:46.259544   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.259554   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:46.259562   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:46.259621   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:46.292190   59674 cri.go:89] found id: ""
	I0722 11:53:46.292220   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.292230   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:46.292239   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:46.292305   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:46.325494   59674 cri.go:89] found id: ""
	I0722 11:53:46.325519   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.325529   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:46.325536   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:46.325608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:46.364367   59674 cri.go:89] found id: ""
	I0722 11:53:46.364403   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.364412   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:46.364422   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:46.364435   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:46.417749   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:46.417792   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:46.433793   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:46.433817   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:46.502075   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:46.502098   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:46.502111   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:46.584038   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:46.584075   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:49.127895   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:49.141601   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:49.141672   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:49.175251   59674 cri.go:89] found id: ""
	I0722 11:53:49.175276   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.175284   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:49.175290   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:49.175346   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:49.214504   59674 cri.go:89] found id: ""
	I0722 11:53:49.214552   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.214563   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:49.214570   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:49.214631   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:49.251844   59674 cri.go:89] found id: ""
	I0722 11:53:49.251872   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.251882   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:49.251889   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:49.251955   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:49.285540   59674 cri.go:89] found id: ""
	I0722 11:53:49.285569   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.285577   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:49.285582   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:49.285630   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:49.323300   59674 cri.go:89] found id: ""
	I0722 11:53:49.323321   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.323331   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:49.323336   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:49.323393   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:49.361571   59674 cri.go:89] found id: ""
	I0722 11:53:49.361599   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.361609   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:49.361615   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:49.361675   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:49.398709   59674 cri.go:89] found id: ""
	I0722 11:53:49.398736   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.398747   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:49.398753   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:49.398813   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:49.430527   59674 cri.go:89] found id: ""
	I0722 11:53:49.430552   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.430564   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:49.430576   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:49.430591   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:49.481517   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:49.481557   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:49.496069   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:49.496094   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:49.563515   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:49.563536   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:49.563549   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:49.645313   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:49.645354   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:45.678130   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.179309   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:45.857932   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.356438   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:50.356527   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:50.348077   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.846675   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.188460   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:52.201620   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:52.201689   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:52.238836   59674 cri.go:89] found id: ""
	I0722 11:53:52.238858   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.238865   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:52.238870   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:52.238932   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:52.275739   59674 cri.go:89] found id: ""
	I0722 11:53:52.275760   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.275768   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:52.275781   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:52.275839   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:52.310362   59674 cri.go:89] found id: ""
	I0722 11:53:52.310390   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.310397   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:52.310402   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:52.310461   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:52.348733   59674 cri.go:89] found id: ""
	I0722 11:53:52.348753   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.348760   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:52.348766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:52.348822   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:52.383052   59674 cri.go:89] found id: ""
	I0722 11:53:52.383079   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.383087   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:52.383094   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:52.383155   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:52.420557   59674 cri.go:89] found id: ""
	I0722 11:53:52.420579   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.420587   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:52.420592   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:52.420655   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:52.454027   59674 cri.go:89] found id: ""
	I0722 11:53:52.454057   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.454066   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:52.454073   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:52.454134   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:52.495433   59674 cri.go:89] found id: ""
	I0722 11:53:52.495458   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.495469   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:52.495480   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:52.495493   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:52.541383   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:52.541417   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:52.595687   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:52.595733   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:52.609965   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:52.609987   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:52.687531   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:52.687552   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:52.687565   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:55.270419   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:55.284577   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:55.284632   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:55.321978   59674 cri.go:89] found id: ""
	I0722 11:53:55.322014   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.322023   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:55.322030   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:55.322092   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:55.358710   59674 cri.go:89] found id: ""
	I0722 11:53:55.358736   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.358746   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:55.358753   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:55.358807   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:55.394784   59674 cri.go:89] found id: ""
	I0722 11:53:55.394810   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.394820   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:55.394827   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:55.394884   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:50.677072   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.678016   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.177624   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.356565   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:54.357061   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.347422   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:57.846266   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.429035   59674 cri.go:89] found id: ""
	I0722 11:53:55.429059   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.429066   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:55.429072   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:55.429122   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:55.464733   59674 cri.go:89] found id: ""
	I0722 11:53:55.464754   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.464761   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:55.464767   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:55.464824   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:55.500113   59674 cri.go:89] found id: ""
	I0722 11:53:55.500140   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.500152   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:55.500164   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:55.500227   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:55.536013   59674 cri.go:89] found id: ""
	I0722 11:53:55.536040   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.536050   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:55.536056   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:55.536129   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:55.575385   59674 cri.go:89] found id: ""
	I0722 11:53:55.575412   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.575420   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:55.575428   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:55.575439   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:55.628427   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:55.628459   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:55.642648   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:55.642677   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:55.715236   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:55.715258   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:55.715270   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:55.794200   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:55.794233   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:58.336329   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:58.351000   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:58.351056   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:58.389817   59674 cri.go:89] found id: ""
	I0722 11:53:58.389841   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.389849   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:58.389854   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:58.389902   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:58.430814   59674 cri.go:89] found id: ""
	I0722 11:53:58.430843   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.430852   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:58.430857   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:58.430917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:58.477898   59674 cri.go:89] found id: ""
	I0722 11:53:58.477928   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.477938   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:58.477947   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:58.477992   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:58.513426   59674 cri.go:89] found id: ""
	I0722 11:53:58.513450   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.513461   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:58.513468   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:58.513530   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:58.546455   59674 cri.go:89] found id: ""
	I0722 11:53:58.546484   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.546494   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:58.546501   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:58.546560   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:58.582248   59674 cri.go:89] found id: ""
	I0722 11:53:58.582273   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.582280   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:58.582286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:58.582339   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:58.617221   59674 cri.go:89] found id: ""
	I0722 11:53:58.617246   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.617253   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:58.617259   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:58.617321   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:58.648896   59674 cri.go:89] found id: ""
	I0722 11:53:58.648930   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.648941   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:58.648949   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:58.648962   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:58.701735   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:58.701771   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:58.715747   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:58.715766   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:58.782104   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:58.782125   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:58.782136   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:58.868634   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:58.868664   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:57.677281   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:00.179188   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:56.856873   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:58.864754   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:59.846378   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:02.346626   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:04.346748   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:01.410874   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:01.423839   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:01.423914   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:01.460156   59674 cri.go:89] found id: ""
	I0722 11:54:01.460181   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.460191   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:01.460198   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:01.460252   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:01.497130   59674 cri.go:89] found id: ""
	I0722 11:54:01.497156   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.497165   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:01.497172   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:01.497228   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:01.532805   59674 cri.go:89] found id: ""
	I0722 11:54:01.532832   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.532842   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:01.532849   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:01.532907   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:01.569955   59674 cri.go:89] found id: ""
	I0722 11:54:01.569989   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.569999   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:01.570014   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:01.570067   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:01.602937   59674 cri.go:89] found id: ""
	I0722 11:54:01.602967   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.602977   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:01.602983   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:01.603033   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:01.634250   59674 cri.go:89] found id: ""
	I0722 11:54:01.634276   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.634283   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:01.634289   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:01.634337   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:01.670256   59674 cri.go:89] found id: ""
	I0722 11:54:01.670286   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.670295   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:01.670300   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:01.670348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:01.708555   59674 cri.go:89] found id: ""
	I0722 11:54:01.708577   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.708584   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:01.708592   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:01.708603   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:01.723065   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:01.723090   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:01.790642   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:01.790662   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:01.790673   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:01.887827   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:01.887861   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:01.927121   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:01.927143   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:04.479248   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:04.493038   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:04.493101   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:04.527516   59674 cri.go:89] found id: ""
	I0722 11:54:04.527539   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.527547   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:04.527557   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:04.527603   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:04.565830   59674 cri.go:89] found id: ""
	I0722 11:54:04.565863   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.565874   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:04.565882   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:04.565970   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:04.606198   59674 cri.go:89] found id: ""
	I0722 11:54:04.606223   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.606235   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:04.606242   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:04.606301   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:04.650372   59674 cri.go:89] found id: ""
	I0722 11:54:04.650394   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.650403   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:04.650411   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:04.650473   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:04.689556   59674 cri.go:89] found id: ""
	I0722 11:54:04.689580   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.689587   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:04.689592   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:04.689648   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:04.724954   59674 cri.go:89] found id: ""
	I0722 11:54:04.724986   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.724997   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:04.725004   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:04.725057   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:04.769000   59674 cri.go:89] found id: ""
	I0722 11:54:04.769024   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.769031   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:04.769037   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:04.769088   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:04.802022   59674 cri.go:89] found id: ""
	I0722 11:54:04.802042   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.802049   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:04.802057   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:04.802067   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:04.855969   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:04.856006   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:04.871210   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:04.871238   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:04.938050   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:04.938069   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:04.938082   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:05.014415   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:05.014449   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:02.677036   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:04.677779   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:01.356993   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:03.856173   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:06.847195   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:08.847333   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:07.556725   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:07.583525   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:07.583600   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:07.618546   59674 cri.go:89] found id: ""
	I0722 11:54:07.618574   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.618584   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:07.618591   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:07.618651   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:07.655218   59674 cri.go:89] found id: ""
	I0722 11:54:07.655247   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.655256   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:07.655261   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:07.655321   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:07.695453   59674 cri.go:89] found id: ""
	I0722 11:54:07.695482   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.695491   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:07.695499   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:07.695558   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:07.729887   59674 cri.go:89] found id: ""
	I0722 11:54:07.729922   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.729932   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:07.729939   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:07.729998   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:07.768429   59674 cri.go:89] found id: ""
	I0722 11:54:07.768451   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.768458   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:07.768464   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:07.768520   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:07.804372   59674 cri.go:89] found id: ""
	I0722 11:54:07.804408   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.804419   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:07.804426   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:07.804479   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:07.840924   59674 cri.go:89] found id: ""
	I0722 11:54:07.840948   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.840958   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:07.840965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:07.841027   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:07.877796   59674 cri.go:89] found id: ""
	I0722 11:54:07.877823   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.877830   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:07.877838   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:07.877849   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:07.930437   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:07.930467   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:07.943581   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:07.943611   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:08.013944   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:08.013963   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:08.013973   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:08.090969   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:08.091007   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:07.178423   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:09.178648   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:05.856697   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:07.857718   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:10.356584   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:11.345407   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:13.346477   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:10.631507   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:10.644886   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:10.644958   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:10.679242   59674 cri.go:89] found id: ""
	I0722 11:54:10.679268   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.679278   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:10.679284   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:10.679340   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:10.714324   59674 cri.go:89] found id: ""
	I0722 11:54:10.714351   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.714358   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:10.714364   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:10.714425   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:10.751053   59674 cri.go:89] found id: ""
	I0722 11:54:10.751075   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.751090   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:10.751097   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:10.751164   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:10.788736   59674 cri.go:89] found id: ""
	I0722 11:54:10.788765   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.788775   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:10.788782   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:10.788899   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:10.823780   59674 cri.go:89] found id: ""
	I0722 11:54:10.823804   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.823814   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:10.823821   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:10.823884   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:10.859708   59674 cri.go:89] found id: ""
	I0722 11:54:10.859731   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.859741   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:10.859748   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:10.859804   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:10.893364   59674 cri.go:89] found id: ""
	I0722 11:54:10.893390   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.893400   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:10.893409   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:10.893471   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:10.929444   59674 cri.go:89] found id: ""
	I0722 11:54:10.929472   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.929481   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:10.929489   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:10.929501   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:10.968567   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:10.968598   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:11.024447   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:11.024484   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:11.039405   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:11.039429   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:11.116322   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:11.116341   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:11.116356   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:13.697581   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:13.711738   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:13.711831   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:13.747711   59674 cri.go:89] found id: ""
	I0722 11:54:13.747742   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.747750   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:13.747757   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:13.747812   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:13.790965   59674 cri.go:89] found id: ""
	I0722 11:54:13.790987   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.790997   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:13.791005   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:13.791053   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:13.829043   59674 cri.go:89] found id: ""
	I0722 11:54:13.829071   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.829080   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:13.829086   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:13.829159   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:13.865542   59674 cri.go:89] found id: ""
	I0722 11:54:13.865560   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.865567   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:13.865572   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:13.865615   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:13.897709   59674 cri.go:89] found id: ""
	I0722 11:54:13.897749   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.897762   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:13.897769   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:13.897833   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:13.931319   59674 cri.go:89] found id: ""
	I0722 11:54:13.931339   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.931348   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:13.931355   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:13.931409   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:13.987927   59674 cri.go:89] found id: ""
	I0722 11:54:13.987954   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.987964   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:13.987970   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:13.988030   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:14.028680   59674 cri.go:89] found id: ""
	I0722 11:54:14.028706   59674 logs.go:276] 0 containers: []
	W0722 11:54:14.028716   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:14.028726   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:14.028743   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:14.089863   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:14.089904   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:14.103664   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:14.103691   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:14.174453   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:14.174479   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:14.174496   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:14.260748   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:14.260780   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:11.677037   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:13.679784   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:12.856073   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:14.857810   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:15.846577   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:17.846873   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:16.800474   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:16.814408   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:16.814472   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:16.849936   59674 cri.go:89] found id: ""
	I0722 11:54:16.849963   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.849972   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:16.849979   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:16.850037   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:16.884323   59674 cri.go:89] found id: ""
	I0722 11:54:16.884349   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.884360   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:16.884367   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:16.884445   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:16.921549   59674 cri.go:89] found id: ""
	I0722 11:54:16.921635   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.921652   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:16.921660   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:16.921726   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:16.959670   59674 cri.go:89] found id: ""
	I0722 11:54:16.959701   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.959711   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:16.959719   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:16.959779   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:16.995577   59674 cri.go:89] found id: ""
	I0722 11:54:16.995605   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.995615   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:16.995624   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:16.995683   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:17.032026   59674 cri.go:89] found id: ""
	I0722 11:54:17.032056   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.032067   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:17.032075   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:17.032156   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:17.068309   59674 cri.go:89] found id: ""
	I0722 11:54:17.068337   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.068348   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:17.068355   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:17.068433   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:17.106731   59674 cri.go:89] found id: ""
	I0722 11:54:17.106760   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.106776   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:17.106787   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:17.106801   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:17.159944   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:17.159971   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:17.174479   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:17.174513   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:17.249311   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:17.249332   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:17.249345   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:17.335527   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:17.335561   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:19.874791   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:19.892887   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:19.892961   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:19.945700   59674 cri.go:89] found id: ""
	I0722 11:54:19.945729   59674 logs.go:276] 0 containers: []
	W0722 11:54:19.945737   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:19.945742   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:19.945799   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:19.996027   59674 cri.go:89] found id: ""
	I0722 11:54:19.996062   59674 logs.go:276] 0 containers: []
	W0722 11:54:19.996072   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:19.996078   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:19.996133   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:20.040793   59674 cri.go:89] found id: ""
	I0722 11:54:20.040820   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.040830   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:20.040837   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:20.040906   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:20.073737   59674 cri.go:89] found id: ""
	I0722 11:54:20.073760   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.073768   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:20.073774   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:20.073817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:20.108255   59674 cri.go:89] found id: ""
	I0722 11:54:20.108280   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.108287   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:20.108294   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:20.108342   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:20.143140   59674 cri.go:89] found id: ""
	I0722 11:54:20.143165   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.143174   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:20.143180   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:20.143225   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:20.177009   59674 cri.go:89] found id: ""
	I0722 11:54:20.177030   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.177037   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:20.177043   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:20.177089   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:20.215743   59674 cri.go:89] found id: ""
	I0722 11:54:20.215765   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.215773   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:20.215781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:20.215791   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:20.267872   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:20.267905   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:20.281601   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:20.281626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:20.352347   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:20.352364   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:20.352376   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:16.178494   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:18.676724   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:17.357519   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:19.856259   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:20.346488   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:22.847018   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:20.431695   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:20.431727   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:22.974218   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:22.988161   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:22.988235   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:23.024542   59674 cri.go:89] found id: ""
	I0722 11:54:23.024571   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.024581   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:23.024588   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:23.024656   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:23.067343   59674 cri.go:89] found id: ""
	I0722 11:54:23.067367   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.067376   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:23.067383   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:23.067443   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:23.103711   59674 cri.go:89] found id: ""
	I0722 11:54:23.103741   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.103751   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:23.103758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:23.103817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:23.137896   59674 cri.go:89] found id: ""
	I0722 11:54:23.137926   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.137937   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:23.137944   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:23.138002   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:23.174689   59674 cri.go:89] found id: ""
	I0722 11:54:23.174722   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.174733   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:23.174742   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:23.174795   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:23.208669   59674 cri.go:89] found id: ""
	I0722 11:54:23.208690   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.208700   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:23.208708   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:23.208766   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:23.243286   59674 cri.go:89] found id: ""
	I0722 11:54:23.243314   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.243326   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:23.243335   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:23.243401   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:23.279277   59674 cri.go:89] found id: ""
	I0722 11:54:23.279303   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.279312   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:23.279324   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:23.279337   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:23.332016   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:23.332045   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:23.346383   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:23.346417   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:23.421449   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:23.421471   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:23.421486   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:23.507395   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:23.507432   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:20.678148   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:23.180048   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:21.856482   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:24.357098   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:25.346414   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:27.847108   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:26.053610   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:26.068359   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:26.068448   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:26.102425   59674 cri.go:89] found id: ""
	I0722 11:54:26.102454   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.102465   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:26.102472   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:26.102531   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:26.135572   59674 cri.go:89] found id: ""
	I0722 11:54:26.135598   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.135608   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:26.135616   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:26.135682   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:26.175015   59674 cri.go:89] found id: ""
	I0722 11:54:26.175044   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.175054   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:26.175062   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:26.175123   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:26.209186   59674 cri.go:89] found id: ""
	I0722 11:54:26.209209   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.209216   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:26.209221   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:26.209275   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:26.248477   59674 cri.go:89] found id: ""
	I0722 11:54:26.248500   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.248507   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:26.248512   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:26.248590   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:26.281481   59674 cri.go:89] found id: ""
	I0722 11:54:26.281506   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.281515   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:26.281520   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:26.281580   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:26.314467   59674 cri.go:89] found id: ""
	I0722 11:54:26.314496   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.314503   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:26.314509   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:26.314556   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:26.349396   59674 cri.go:89] found id: ""
	I0722 11:54:26.349422   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.349431   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:26.349441   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:26.349454   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:26.403227   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:26.403253   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:26.415860   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:26.415882   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:26.484768   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:26.484793   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:26.484809   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:26.563360   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:26.563396   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:29.103764   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:29.117120   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:29.117193   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:29.153198   59674 cri.go:89] found id: ""
	I0722 11:54:29.153241   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.153252   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:29.153260   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:29.153324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:29.190406   59674 cri.go:89] found id: ""
	I0722 11:54:29.190426   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.190433   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:29.190438   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:29.190486   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:29.232049   59674 cri.go:89] found id: ""
	I0722 11:54:29.232073   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.232080   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:29.232085   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:29.232147   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:29.270174   59674 cri.go:89] found id: ""
	I0722 11:54:29.270200   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.270208   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:29.270218   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:29.270268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:29.307709   59674 cri.go:89] found id: ""
	I0722 11:54:29.307733   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.307740   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:29.307746   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:29.307802   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:29.343807   59674 cri.go:89] found id: ""
	I0722 11:54:29.343832   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.343842   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:29.343850   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:29.343907   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:29.380240   59674 cri.go:89] found id: ""
	I0722 11:54:29.380263   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.380270   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:29.380276   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:29.380332   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:29.412785   59674 cri.go:89] found id: ""
	I0722 11:54:29.412811   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.412820   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:29.412830   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:29.412844   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:29.470948   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:29.470985   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:29.485120   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:29.485146   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:29.558760   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:29.558778   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:29.558792   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:29.638093   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:29.638123   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:25.677216   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:28.177196   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:30.179148   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:26.357390   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:28.856928   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:30.345586   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:32.346444   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:34.347606   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:32.183511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:32.196719   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:32.196796   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:32.229436   59674 cri.go:89] found id: ""
	I0722 11:54:32.229466   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.229474   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:32.229480   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:32.229533   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:32.271971   59674 cri.go:89] found id: ""
	I0722 11:54:32.271998   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.272008   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:32.272017   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:32.272086   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:32.302967   59674 cri.go:89] found id: ""
	I0722 11:54:32.302991   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.302999   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:32.303005   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:32.303053   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:32.334443   59674 cri.go:89] found id: ""
	I0722 11:54:32.334468   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.334478   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:32.334485   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:32.334544   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:32.371586   59674 cri.go:89] found id: ""
	I0722 11:54:32.371612   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.371622   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:32.371630   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:32.371693   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:32.419920   59674 cri.go:89] found id: ""
	I0722 11:54:32.419954   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.419966   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:32.419974   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:32.420034   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:32.459377   59674 cri.go:89] found id: ""
	I0722 11:54:32.459398   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.459405   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:32.459411   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:32.459472   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:32.500740   59674 cri.go:89] found id: ""
	I0722 11:54:32.500764   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.500771   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:32.500781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:32.500796   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:32.551285   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:32.551316   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:32.564448   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:32.564474   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:32.637652   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:32.637679   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:32.637694   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:32.721599   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:32.721638   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:35.265202   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:35.278766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:35.278844   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:35.312545   59674 cri.go:89] found id: ""
	I0722 11:54:35.312574   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.312582   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:35.312587   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:35.312637   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:35.346988   59674 cri.go:89] found id: ""
	I0722 11:54:35.347014   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.347024   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:35.347032   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:35.347090   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:35.382876   59674 cri.go:89] found id: ""
	I0722 11:54:35.382908   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.382920   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:35.382929   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:35.382997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:32.677327   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:34.677947   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:31.356011   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:33.356576   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:36.846349   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:39.346311   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:35.418093   59674 cri.go:89] found id: ""
	I0722 11:54:35.418115   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.418122   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:35.418129   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:35.418186   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:35.455262   59674 cri.go:89] found id: ""
	I0722 11:54:35.455291   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.455301   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:35.455306   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:35.455362   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:35.494893   59674 cri.go:89] found id: ""
	I0722 11:54:35.494924   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.494934   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:35.494945   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:35.495007   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:35.529768   59674 cri.go:89] found id: ""
	I0722 11:54:35.529791   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.529798   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:35.529804   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:35.529850   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:35.564972   59674 cri.go:89] found id: ""
	I0722 11:54:35.565001   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.565012   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:35.565024   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:35.565039   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:35.615985   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:35.616025   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:35.630133   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:35.630156   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:35.699669   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:35.699697   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:35.699711   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:35.779737   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:35.779771   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:38.320368   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:38.334371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:38.334443   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:38.371050   59674 cri.go:89] found id: ""
	I0722 11:54:38.371081   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.371088   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:38.371109   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:38.371170   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:38.410676   59674 cri.go:89] found id: ""
	I0722 11:54:38.410698   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.410706   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:38.410712   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:38.410770   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:38.447331   59674 cri.go:89] found id: ""
	I0722 11:54:38.447357   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.447366   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:38.447371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:38.447426   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:38.483548   59674 cri.go:89] found id: ""
	I0722 11:54:38.483589   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.483600   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:38.483608   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:38.483669   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:38.521694   59674 cri.go:89] found id: ""
	I0722 11:54:38.521723   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.521737   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:38.521742   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:38.521799   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:38.560507   59674 cri.go:89] found id: ""
	I0722 11:54:38.560532   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.560543   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:38.560550   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:38.560609   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:38.595734   59674 cri.go:89] found id: ""
	I0722 11:54:38.595761   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.595771   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:38.595778   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:38.595839   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:38.634176   59674 cri.go:89] found id: ""
	I0722 11:54:38.634198   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.634205   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:38.634213   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:38.634224   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:38.688196   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:38.688235   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:38.701554   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:38.701583   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:38.772547   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:38.772575   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:38.772590   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:38.858025   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:38.858056   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:37.179449   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:39.179903   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:35.856424   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:38.357566   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:41.347531   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:43.846195   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:41.400777   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:41.415370   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:41.415427   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:41.448023   59674 cri.go:89] found id: ""
	I0722 11:54:41.448045   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.448052   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:41.448058   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:41.448104   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:41.480745   59674 cri.go:89] found id: ""
	I0722 11:54:41.480766   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.480774   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:41.480779   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:41.480830   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:41.514627   59674 cri.go:89] found id: ""
	I0722 11:54:41.514651   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.514666   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:41.514673   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:41.514731   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:41.548226   59674 cri.go:89] found id: ""
	I0722 11:54:41.548255   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.548267   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:41.548274   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:41.548325   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:41.581361   59674 cri.go:89] found id: ""
	I0722 11:54:41.581383   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.581390   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:41.581396   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:41.581452   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:41.616249   59674 cri.go:89] found id: ""
	I0722 11:54:41.616277   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.616287   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:41.616295   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:41.616361   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:41.651569   59674 cri.go:89] found id: ""
	I0722 11:54:41.651593   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.651601   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:41.651607   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:41.651657   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:41.685173   59674 cri.go:89] found id: ""
	I0722 11:54:41.685194   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.685202   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:41.685209   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:41.685222   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:41.762374   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:41.762393   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:41.762405   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:41.843370   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:41.843403   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:41.883097   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:41.883127   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:41.933824   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:41.933854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:44.447568   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:44.461528   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:44.461608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:44.497926   59674 cri.go:89] found id: ""
	I0722 11:54:44.497951   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.497958   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:44.497963   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:44.498023   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:44.534483   59674 cri.go:89] found id: ""
	I0722 11:54:44.534507   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.534515   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:44.534520   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:44.534565   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:44.573106   59674 cri.go:89] found id: ""
	I0722 11:54:44.573140   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.573148   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:44.573154   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:44.573204   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:44.610565   59674 cri.go:89] found id: ""
	I0722 11:54:44.610612   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.610626   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:44.610634   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:44.610697   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:44.646946   59674 cri.go:89] found id: ""
	I0722 11:54:44.646980   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.646994   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:44.647001   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:44.647060   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:44.685876   59674 cri.go:89] found id: ""
	I0722 11:54:44.685904   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.685913   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:44.685919   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:44.685969   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:44.720398   59674 cri.go:89] found id: ""
	I0722 11:54:44.720425   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.720434   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:44.720441   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:44.720506   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:44.757472   59674 cri.go:89] found id: ""
	I0722 11:54:44.757501   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.757511   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:44.757522   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:44.757535   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:44.807442   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:44.807468   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:44.820432   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:44.820457   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:44.892182   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:44.892199   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:44.892209   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:44.976545   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:44.976580   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:41.677120   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:44.178554   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:40.855578   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:42.856278   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:44.857519   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:45.846257   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:47.846886   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:47.519413   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:47.532974   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:47.533035   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:47.570869   59674 cri.go:89] found id: ""
	I0722 11:54:47.570904   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.570915   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:47.570923   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:47.571055   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:47.606020   59674 cri.go:89] found id: ""
	I0722 11:54:47.606045   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.606052   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:47.606057   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:47.606106   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:47.642717   59674 cri.go:89] found id: ""
	I0722 11:54:47.642741   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.642752   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:47.642758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:47.642817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:47.677761   59674 cri.go:89] found id: ""
	I0722 11:54:47.677786   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.677796   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:47.677803   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:47.677863   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:47.710989   59674 cri.go:89] found id: ""
	I0722 11:54:47.711016   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.711025   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:47.711032   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:47.711097   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:47.744814   59674 cri.go:89] found id: ""
	I0722 11:54:47.744839   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.744847   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:47.744853   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:47.744904   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:47.778926   59674 cri.go:89] found id: ""
	I0722 11:54:47.778953   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.778960   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:47.778965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:47.779015   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:47.818419   59674 cri.go:89] found id: ""
	I0722 11:54:47.818458   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.818465   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:47.818473   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:47.818485   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:47.870867   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:47.870892   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:47.884504   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:47.884523   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:47.952449   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:47.952470   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:47.952485   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:48.035731   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:48.035763   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:46.181522   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:48.676888   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:46.860517   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:49.356456   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:50.346125   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:52.848790   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:50.589071   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:50.602786   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:50.602880   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:50.638324   59674 cri.go:89] found id: ""
	I0722 11:54:50.638355   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.638366   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:50.638375   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:50.638438   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:50.674906   59674 cri.go:89] found id: ""
	I0722 11:54:50.674932   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.674947   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:50.674955   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:50.675017   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:50.709284   59674 cri.go:89] found id: ""
	I0722 11:54:50.709313   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.709322   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:50.709328   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:50.709387   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:50.748595   59674 cri.go:89] found id: ""
	I0722 11:54:50.748623   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.748632   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:50.748638   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:50.748695   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:50.782681   59674 cri.go:89] found id: ""
	I0722 11:54:50.782707   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.782716   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:50.782721   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:50.782797   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:50.820037   59674 cri.go:89] found id: ""
	I0722 11:54:50.820067   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.820077   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:50.820084   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:50.820150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:50.857807   59674 cri.go:89] found id: ""
	I0722 11:54:50.857835   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.857845   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:50.857852   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:50.857925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:50.894924   59674 cri.go:89] found id: ""
	I0722 11:54:50.894946   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.894954   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:50.894962   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:50.894981   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:50.947373   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:50.947407   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:50.962243   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:50.962272   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:51.041450   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:51.041474   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:51.041488   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:51.133982   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:51.134018   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:53.678461   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:53.691710   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:53.691778   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:53.726266   59674 cri.go:89] found id: ""
	I0722 11:54:53.726294   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.726305   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:53.726313   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:53.726366   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:53.759262   59674 cri.go:89] found id: ""
	I0722 11:54:53.759291   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.759303   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:53.759311   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:53.759381   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:53.795859   59674 cri.go:89] found id: ""
	I0722 11:54:53.795894   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.795906   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:53.795913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:53.795975   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:53.842343   59674 cri.go:89] found id: ""
	I0722 11:54:53.842366   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.842379   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:53.842387   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:53.842444   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:53.882648   59674 cri.go:89] found id: ""
	I0722 11:54:53.882674   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.882684   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:53.882691   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:53.882751   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:53.914352   59674 cri.go:89] found id: ""
	I0722 11:54:53.914373   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.914380   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:53.914386   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:53.914442   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:53.952257   59674 cri.go:89] found id: ""
	I0722 11:54:53.952286   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.952296   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:53.952301   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:53.952348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:53.991612   59674 cri.go:89] found id: ""
	I0722 11:54:53.991642   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.991651   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:53.991661   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:53.991682   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:54.065253   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:54.065271   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:54.065285   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:54.153570   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:54.153603   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:54.195100   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:54.195138   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:54.246784   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:54.246812   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:50.677516   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:53.180319   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.182749   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:51.356623   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:53.856817   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.346845   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:57.846691   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:56.762702   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:56.776501   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:56.776567   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:56.809838   59674 cri.go:89] found id: ""
	I0722 11:54:56.809866   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.809874   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:56.809882   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:56.809934   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:56.845567   59674 cri.go:89] found id: ""
	I0722 11:54:56.845594   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.845602   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:56.845610   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:56.845672   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:56.879899   59674 cri.go:89] found id: ""
	I0722 11:54:56.879929   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.879939   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:56.879946   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:56.880000   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:56.911631   59674 cri.go:89] found id: ""
	I0722 11:54:56.911658   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.911667   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:56.911675   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:56.911734   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:56.946101   59674 cri.go:89] found id: ""
	I0722 11:54:56.946124   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.946132   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:56.946142   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:56.946211   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:56.980265   59674 cri.go:89] found id: ""
	I0722 11:54:56.980289   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.980301   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:56.980308   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:56.980367   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:57.014902   59674 cri.go:89] found id: ""
	I0722 11:54:57.014935   59674 logs.go:276] 0 containers: []
	W0722 11:54:57.014951   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:57.014958   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:57.015021   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:57.051573   59674 cri.go:89] found id: ""
	I0722 11:54:57.051597   59674 logs.go:276] 0 containers: []
	W0722 11:54:57.051605   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:57.051613   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:57.051626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:57.065650   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:57.065683   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:57.133230   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:57.133257   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:57.133275   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:57.217002   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:57.217038   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:57.260236   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:57.260264   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:59.812785   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:59.826782   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:59.826836   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:59.863375   59674 cri.go:89] found id: ""
	I0722 11:54:59.863404   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.863414   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:59.863423   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:59.863484   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:59.902161   59674 cri.go:89] found id: ""
	I0722 11:54:59.902193   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.902204   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:59.902211   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:59.902263   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:59.945153   59674 cri.go:89] found id: ""
	I0722 11:54:59.945182   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.945193   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:59.945201   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:59.945265   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:59.989535   59674 cri.go:89] found id: ""
	I0722 11:54:59.989562   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.989570   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:59.989575   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:59.989643   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:00.028977   59674 cri.go:89] found id: ""
	I0722 11:55:00.029001   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.029009   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:00.029017   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:00.029068   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:00.065396   59674 cri.go:89] found id: ""
	I0722 11:55:00.065425   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.065437   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:00.065447   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:00.065502   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:00.104354   59674 cri.go:89] found id: ""
	I0722 11:55:00.104397   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.104409   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:00.104417   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:00.104480   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:00.141798   59674 cri.go:89] found id: ""
	I0722 11:55:00.141822   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.141829   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:00.141838   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:00.141853   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:00.195791   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:00.195823   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:00.214812   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:00.214845   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:00.307286   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:00.307311   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:00.307323   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:00.409770   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:00.409805   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:57.676737   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:59.677273   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.857348   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:58.356555   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:59.846954   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.345998   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:04.346077   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.951630   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:02.964673   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:02.964728   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:03.005256   59674 cri.go:89] found id: ""
	I0722 11:55:03.005285   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.005296   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:03.005303   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:03.005359   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:03.037558   59674 cri.go:89] found id: ""
	I0722 11:55:03.037587   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.037595   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:03.037600   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:03.037646   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:03.071168   59674 cri.go:89] found id: ""
	I0722 11:55:03.071196   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.071206   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:03.071214   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:03.071271   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:03.104212   59674 cri.go:89] found id: ""
	I0722 11:55:03.104238   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.104248   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:03.104255   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:03.104313   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:03.141378   59674 cri.go:89] found id: ""
	I0722 11:55:03.141401   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.141409   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:03.141414   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:03.141458   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:03.178881   59674 cri.go:89] found id: ""
	I0722 11:55:03.178906   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.178915   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:03.178923   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:03.178987   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:03.215768   59674 cri.go:89] found id: ""
	I0722 11:55:03.215796   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.215804   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:03.215810   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:03.215854   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:03.256003   59674 cri.go:89] found id: ""
	I0722 11:55:03.256029   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.256041   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:03.256051   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:03.256069   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:03.308182   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:03.308216   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:03.323870   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:03.323903   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:03.406646   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:03.406670   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:03.406682   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:03.490947   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:03.490984   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:01.677312   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:03.677505   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:00.856013   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.856211   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:04.857113   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.348448   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:08.846007   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.030341   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:06.046814   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:06.046874   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:06.088735   59674 cri.go:89] found id: ""
	I0722 11:55:06.088756   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.088764   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:06.088770   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:06.088823   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:06.153138   59674 cri.go:89] found id: ""
	I0722 11:55:06.153165   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.153174   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:06.153181   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:06.153241   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:06.203479   59674 cri.go:89] found id: ""
	I0722 11:55:06.203506   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.203516   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:06.203523   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:06.203585   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:06.239632   59674 cri.go:89] found id: ""
	I0722 11:55:06.239661   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.239671   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:06.239678   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:06.239739   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:06.278663   59674 cri.go:89] found id: ""
	I0722 11:55:06.278693   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.278703   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:06.278711   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:06.278772   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:06.318291   59674 cri.go:89] found id: ""
	I0722 11:55:06.318315   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.318323   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:06.318329   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:06.318382   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:06.355362   59674 cri.go:89] found id: ""
	I0722 11:55:06.355383   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.355390   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:06.355395   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:06.355446   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:06.395032   59674 cri.go:89] found id: ""
	I0722 11:55:06.395062   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.395073   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:06.395084   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:06.395098   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:06.451585   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:06.451623   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:06.466009   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:06.466037   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:06.534051   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:06.534071   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:06.534082   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:06.617165   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:06.617202   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:09.155242   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:09.169327   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:09.169389   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:09.209138   59674 cri.go:89] found id: ""
	I0722 11:55:09.209165   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.209174   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:09.209181   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:09.209243   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:09.249129   59674 cri.go:89] found id: ""
	I0722 11:55:09.249156   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.249167   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:09.249175   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:09.249237   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:09.284350   59674 cri.go:89] found id: ""
	I0722 11:55:09.284374   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.284400   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:09.284416   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:09.284487   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:09.317288   59674 cri.go:89] found id: ""
	I0722 11:55:09.317315   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.317322   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:09.317327   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:09.317374   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:09.353227   59674 cri.go:89] found id: ""
	I0722 11:55:09.353249   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.353259   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:09.353266   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:09.353324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:09.388376   59674 cri.go:89] found id: ""
	I0722 11:55:09.388434   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.388442   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:09.388448   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:09.388498   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:09.422197   59674 cri.go:89] found id: ""
	I0722 11:55:09.422221   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.422228   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:09.422235   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:09.422282   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:09.455321   59674 cri.go:89] found id: ""
	I0722 11:55:09.455350   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.455360   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:09.455370   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:09.455384   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:09.536331   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:09.536366   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:09.578847   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:09.578880   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:09.630597   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:09.630626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:09.644531   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:09.644557   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:09.710502   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
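The block above is minikube's per-component probe for an existing control plane: each "sudo crictl ps -a --quiet --name=<component>" query returns no container IDs, so no kube-apiserver, etcd, scheduler, or other control-plane container is running on the node yet. A minimal sketch of repeating that probe by hand on the node (inside a minikube ssh session), assuming crictl is available as it is in this log:

    # Illustrative: repeat minikube's per-component container probe manually.
    # Empty output from each command means no matching container exists.
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    sudo crictl ps -a --quiet --name=kube-scheduler
    sudo crictl ps -a --quiet --name=kube-controller-manager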
	I0722 11:55:05.677998   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:07.678875   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:10.179254   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.857151   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:09.355988   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:11.345887   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:13.346945   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:12.210716   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:12.223909   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:12.223969   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:12.259241   59674 cri.go:89] found id: ""
	I0722 11:55:12.259266   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.259275   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:12.259282   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:12.259344   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:12.293967   59674 cri.go:89] found id: ""
	I0722 11:55:12.294000   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.294007   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:12.294013   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:12.294061   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:12.328073   59674 cri.go:89] found id: ""
	I0722 11:55:12.328106   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.328114   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:12.328121   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:12.328180   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:12.363176   59674 cri.go:89] found id: ""
	I0722 11:55:12.363200   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.363207   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:12.363213   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:12.363287   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:12.398145   59674 cri.go:89] found id: ""
	I0722 11:55:12.398171   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.398180   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:12.398185   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:12.398231   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:12.431824   59674 cri.go:89] found id: ""
	I0722 11:55:12.431853   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.431861   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:12.431867   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:12.431925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:12.465097   59674 cri.go:89] found id: ""
	I0722 11:55:12.465128   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.465135   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:12.465140   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:12.465186   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:12.502934   59674 cri.go:89] found id: ""
	I0722 11:55:12.502965   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.502974   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:12.502984   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:12.502999   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:12.541950   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:12.541979   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:12.592632   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:12.592660   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:12.606073   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:12.606098   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:12.675388   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:12.675417   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:12.675432   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
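Because "describe nodes" keeps failing (the apiserver on localhost:8443 is refusing connections), the gather step falls back to host-level sources: the kubelet and CRI-O journals, dmesg, and raw container status. A hedged sketch of collecting the same diagnostics manually on the node, using the exact commands shown in the log:

    # Illustrative: the host-level diagnostics minikube gathers above.
    sudo journalctl -u kubelet -n 400                                          # kubelet log tail
    sudo journalctl -u crio -n 400                                             # CRI-O log tail
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings/errors
    sudo crictl ps -a                                                          # container status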
	I0722 11:55:15.253008   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:15.266957   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:15.267028   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:15.303035   59674 cri.go:89] found id: ""
	I0722 11:55:15.303069   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.303080   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:15.303088   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:15.303150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:15.338089   59674 cri.go:89] found id: ""
	I0722 11:55:15.338113   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.338121   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:15.338126   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:15.338184   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:15.376973   59674 cri.go:89] found id: ""
	I0722 11:55:15.376998   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.377005   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:15.377015   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:15.377075   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:12.678613   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.178912   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:11.356248   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:13.855992   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.845568   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:17.845680   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.416466   59674 cri.go:89] found id: ""
	I0722 11:55:15.416491   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.416500   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:15.416507   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:15.416565   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:15.456472   59674 cri.go:89] found id: ""
	I0722 11:55:15.456501   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.456511   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:15.456519   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:15.456580   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:15.491963   59674 cri.go:89] found id: ""
	I0722 11:55:15.491991   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.491999   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:15.492005   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:15.492062   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:15.530819   59674 cri.go:89] found id: ""
	I0722 11:55:15.530847   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.530857   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:15.530865   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:15.530934   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:15.569388   59674 cri.go:89] found id: ""
	I0722 11:55:15.569415   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.569422   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:15.569430   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:15.569439   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:15.623949   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:15.623981   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:15.637828   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:15.637848   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:15.707733   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:15.707754   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:15.707765   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:15.787435   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:15.787473   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:18.329310   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:18.342412   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:18.342476   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:18.379542   59674 cri.go:89] found id: ""
	I0722 11:55:18.379563   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.379570   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:18.379575   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:18.379657   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:18.414442   59674 cri.go:89] found id: ""
	I0722 11:55:18.414468   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.414477   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:18.414483   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:18.414526   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:18.454571   59674 cri.go:89] found id: ""
	I0722 11:55:18.454598   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.454608   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:18.454614   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:18.454658   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:18.491012   59674 cri.go:89] found id: ""
	I0722 11:55:18.491039   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.491047   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:18.491052   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:18.491114   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:18.525923   59674 cri.go:89] found id: ""
	I0722 11:55:18.525952   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.525962   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:18.525970   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:18.526031   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:18.560288   59674 cri.go:89] found id: ""
	I0722 11:55:18.560315   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.560325   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:18.560332   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:18.560412   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:18.596674   59674 cri.go:89] found id: ""
	I0722 11:55:18.596698   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.596706   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:18.596712   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:18.596766   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:18.635012   59674 cri.go:89] found id: ""
	I0722 11:55:18.635034   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.635041   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:18.635049   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:18.635060   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:18.685999   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:18.686024   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:18.700085   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:18.700108   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:18.765465   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:18.765484   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:18.765495   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:18.846554   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:18.846592   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:17.179144   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:19.677144   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.857428   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:18.356050   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:19.846343   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:22.345281   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:24.346147   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
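The interleaved pod_ready lines come from the other profiles in this run (PIDs 58921, 60225, 59477), each polling its metrics-server pod for the Ready condition. A hedged kubectl equivalent of that poll, using a pod name taken from the log; the --context value is a placeholder for whichever profile is being checked:

    # Illustrative: read the Ready condition the pod_ready loop is waiting on.
    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-wm2w8 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'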
	I0722 11:55:21.389684   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:21.401964   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:21.402042   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:21.438128   59674 cri.go:89] found id: ""
	I0722 11:55:21.438156   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.438165   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:21.438171   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:21.438258   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:21.475793   59674 cri.go:89] found id: ""
	I0722 11:55:21.475819   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.475828   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:21.475833   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:21.475878   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:21.510238   59674 cri.go:89] found id: ""
	I0722 11:55:21.510265   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.510273   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:21.510278   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:21.510333   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:21.548293   59674 cri.go:89] found id: ""
	I0722 11:55:21.548320   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.548331   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:21.548337   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:21.548403   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:21.584542   59674 cri.go:89] found id: ""
	I0722 11:55:21.584573   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.584584   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:21.584591   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:21.584655   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:21.621709   59674 cri.go:89] found id: ""
	I0722 11:55:21.621745   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.621758   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:21.621767   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:21.621854   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:21.656111   59674 cri.go:89] found id: ""
	I0722 11:55:21.656134   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.656143   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:21.656148   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:21.656197   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:21.692324   59674 cri.go:89] found id: ""
	I0722 11:55:21.692353   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.692363   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:21.692374   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:21.692405   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:21.769527   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:21.769550   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:21.769566   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:21.850069   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:21.850107   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:21.890781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:21.890816   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:21.952170   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:21.952211   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:24.467001   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:24.481526   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:24.481594   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:24.518694   59674 cri.go:89] found id: ""
	I0722 11:55:24.518724   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.518734   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:24.518740   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:24.518798   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:24.554606   59674 cri.go:89] found id: ""
	I0722 11:55:24.554629   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.554637   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:24.554642   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:24.554703   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:24.592042   59674 cri.go:89] found id: ""
	I0722 11:55:24.592072   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.592083   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:24.592090   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:24.592158   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:24.624456   59674 cri.go:89] found id: ""
	I0722 11:55:24.624479   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.624487   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:24.624493   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:24.624540   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:24.659502   59674 cri.go:89] found id: ""
	I0722 11:55:24.659526   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.659533   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:24.659541   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:24.659586   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:24.695548   59674 cri.go:89] found id: ""
	I0722 11:55:24.695572   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.695580   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:24.695585   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:24.695632   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:24.730320   59674 cri.go:89] found id: ""
	I0722 11:55:24.730362   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.730383   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:24.730391   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:24.730451   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:24.763002   59674 cri.go:89] found id: ""
	I0722 11:55:24.763031   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.763042   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:24.763053   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:24.763068   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:24.801537   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:24.801568   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:24.855157   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:24.855189   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:24.872946   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:24.872983   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:24.943654   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:24.943683   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:24.943697   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:21.677205   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:23.677250   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:20.857023   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:22.857266   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:25.356958   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:24.840700   59477 pod_ready.go:81] duration metric: took 4m0.000727978s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" ...
	E0722 11:55:24.840728   59477 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:55:24.840745   59477 pod_ready.go:38] duration metric: took 4m14.023350526s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:55:24.840771   59477 kubeadm.go:597] duration metric: took 4m21.561007849s to restartPrimaryControlPlane
	W0722 11:55:24.840842   59477 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:55:24.840871   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
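Having waited the full 4m0s for metrics-server without success, minikube gives up on restarting the existing control plane and falls back to a full reset followed by a fresh kubeadm init. The reset it issues is shown above; as a manual, illustrative form (the version directory comes from this log, and the CRI socket is CRI-O's default):

    # Illustrative: the reset minikube runs before re-initializing the cluster.
    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force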
	I0722 11:55:27.532539   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:27.551073   59674 kubeadm.go:597] duration metric: took 4m3.599954496s to restartPrimaryControlPlane
	W0722 11:55:27.551154   59674 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:55:27.551183   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:55:28.607726   59674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.056515088s)
	I0722 11:55:28.607808   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:55:28.622638   59674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:55:28.633327   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:55:28.643630   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:55:28.643657   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:55:28.643708   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:55:28.655424   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:55:28.655488   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:55:28.666415   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:55:28.678321   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:55:28.678387   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:55:28.687990   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:55:28.700637   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:55:28.700688   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:55:28.711737   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:55:28.723611   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:55:28.723672   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
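Before re-running kubeadm init, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it (here all four files are simply absent, so each grep exits non-zero and the rm is a no-op). A compact, illustrative shell form of that cleanup, assuming the same endpoint and file set as above:

    # Sketch of the stale kubeconfig cleanup shown in the log (illustrative).
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f   # drop configs not pointing at the expected endpoint
    done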
	I0722 11:55:28.734841   59674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:55:28.966498   59674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:55:25.677562   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:27.677626   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:29.678017   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:27.359533   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:29.856428   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:32.177943   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:34.677244   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:32.356225   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:34.357127   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:36.677815   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:39.176631   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:36.857121   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:38.857187   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:41.177346   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:43.179961   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:41.357029   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:43.857548   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:45.676921   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:47.677104   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:50.177979   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:45.858212   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:48.355737   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:50.357352   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:52.179852   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:54.678525   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:52.856789   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:54.857581   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:56.291211   59477 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.450312515s)
	I0722 11:55:56.291284   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:55:56.307108   59477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:55:56.316823   59477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:55:56.325987   59477 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:55:56.326008   59477 kubeadm.go:157] found existing configuration files:
	
	I0722 11:55:56.326040   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:55:56.334979   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:55:56.335022   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:55:56.344230   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:55:56.352903   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:55:56.352952   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:55:56.362589   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:55:56.371907   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:55:56.371960   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:55:56.381248   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:55:56.389979   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:55:56.390029   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:55:56.399143   59477 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:55:56.451195   59477 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 11:55:56.451336   59477 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:55:56.583288   59477 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:55:56.583416   59477 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:55:56.583545   59477 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:55:56.812941   59477 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:55:56.814801   59477 out.go:204]   - Generating certificates and keys ...
	I0722 11:55:56.814907   59477 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:55:56.815004   59477 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:55:56.815107   59477 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:55:56.815158   59477 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:55:56.815245   59477 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:55:56.815328   59477 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:55:56.815398   59477 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:55:56.815472   59477 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:55:56.815551   59477 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:55:56.815665   59477 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:55:56.815720   59477 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:55:56.815792   59477 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:55:56.905480   59477 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:55:57.235259   59477 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:55:57.382716   59477 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:55:57.782474   59477 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:55:57.975512   59477 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:55:57.975939   59477 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:55:57.978251   59477 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:55:57.980183   59477 out.go:204]   - Booting up control plane ...
	I0722 11:55:57.980296   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:55:57.980407   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:55:57.980501   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:55:57.997417   59477 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:55:57.998246   59477 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:55:57.998298   59477 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:55:58.125569   59477 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:55:58.125669   59477 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:55:59.127130   59477 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00142245s
	I0722 11:55:59.127288   59477 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:55:56.679572   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:59.177683   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:56.858200   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:59.356467   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:04.131970   59477 kubeadm.go:310] [api-check] The API server is healthy after 5.00210234s
	I0722 11:56:04.145149   59477 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:56:04.162087   59477 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:56:04.189220   59477 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:56:04.189501   59477 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-802149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:56:04.201016   59477 kubeadm.go:310] [bootstrap-token] Using token: kquhfx.1qbb4r033babuox0
	I0722 11:56:04.202331   59477 out.go:204]   - Configuring RBAC rules ...
	I0722 11:56:04.202440   59477 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:56:04.207324   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:56:04.217174   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:56:04.221591   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:56:04.225670   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:56:04.229980   59477 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:56:04.540237   59477 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:56:01.677898   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:03.678604   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:05.015791   59477 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:56:05.538526   59477 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:56:05.539474   59477 kubeadm.go:310] 
	I0722 11:56:05.539573   59477 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:56:05.539585   59477 kubeadm.go:310] 
	I0722 11:56:05.539684   59477 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:56:05.539701   59477 kubeadm.go:310] 
	I0722 11:56:05.539735   59477 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:56:05.539818   59477 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:56:05.539894   59477 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:56:05.539903   59477 kubeadm.go:310] 
	I0722 11:56:05.540003   59477 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:56:05.540026   59477 kubeadm.go:310] 
	I0722 11:56:05.540102   59477 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:56:05.540111   59477 kubeadm.go:310] 
	I0722 11:56:05.540178   59477 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:56:05.540280   59477 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:56:05.540390   59477 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:56:05.540399   59477 kubeadm.go:310] 
	I0722 11:56:05.540496   59477 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:56:05.540612   59477 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:56:05.540621   59477 kubeadm.go:310] 
	I0722 11:56:05.540765   59477 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kquhfx.1qbb4r033babuox0 \
	I0722 11:56:05.540917   59477 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:56:05.540954   59477 kubeadm.go:310] 	--control-plane 
	I0722 11:56:05.540963   59477 kubeadm.go:310] 
	I0722 11:56:05.541073   59477 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:56:05.541082   59477 kubeadm.go:310] 
	I0722 11:56:05.541188   59477 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kquhfx.1qbb4r033babuox0 \
	I0722 11:56:05.541330   59477 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:56:05.541765   59477 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:56:05.541892   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:56:05.541910   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:56:05.543345   59477 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:56:01.357811   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:03.359464   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:04.851108   60225 pod_ready.go:81] duration metric: took 4m0.000807254s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" ...
	E0722 11:56:04.851137   60225 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:56:04.851154   60225 pod_ready.go:38] duration metric: took 4m12.048821409s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:04.851185   60225 kubeadm.go:597] duration metric: took 4m21.969513024s to restartPrimaryControlPlane
	W0722 11:56:04.851256   60225 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:56:04.851288   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:56:05.544535   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:56:05.556946   59477 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
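The bridge CNI step writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist on the node; the file contents are not echoed in the log. To inspect what was installed, one could run, for example:

    # Illustrative: inspect the CNI config minikube just copied to the node.
    ls /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist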
	I0722 11:56:05.578633   59477 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:56:05.578705   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:05.578715   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-802149 minikube.k8s.io/updated_at=2024_07_22T11_56_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=embed-certs-802149 minikube.k8s.io/primary=true
	I0722 11:56:05.614944   59477 ops.go:34] apiserver oom_adj: -16
	I0722 11:56:05.773354   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:06.273578   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:06.773980   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:07.274302   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:07.774175   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:08.274316   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:08.774096   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:09.273401   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:05.678724   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:08.178575   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:09.774010   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.274337   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.773845   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:11.273387   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:11.773610   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:12.273579   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:12.774429   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:13.273474   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:13.774397   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:14.273900   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.677662   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:12.679646   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:15.177660   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:14.774140   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:15.273579   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:15.773981   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:16.273668   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:16.773814   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:17.274094   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:17.773477   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:18.273407   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:18.774424   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:19.274215   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:19.371507   59477 kubeadm.go:1113] duration metric: took 13.792861511s to wait for elevateKubeSystemPrivileges
	I0722 11:56:19.371549   59477 kubeadm.go:394] duration metric: took 5m16.138448524s to StartCluster
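The dozens of identical "kubectl get sa default" runs above are a polling loop: judging by the timestamps, minikube retries roughly every 500ms until the default service account exists, which is what the elevateKubeSystemPrivileges step (13.79s here) is waiting for. A minimal bash equivalent, offered only as an illustration of the same loop:

    # Illustrative equivalent of the polling above; interval inferred from the log timestamps.
    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done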
	I0722 11:56:19.371572   59477 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:19.371660   59477 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:56:19.373430   59477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:19.373759   59477 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:56:19.373841   59477 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:56:19.373922   59477 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-802149"
	I0722 11:56:19.373932   59477 addons.go:69] Setting default-storageclass=true in profile "embed-certs-802149"
	I0722 11:56:19.373962   59477 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-802149"
	I0722 11:56:19.373963   59477 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-802149"
	W0722 11:56:19.373971   59477 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:56:19.373974   59477 addons.go:69] Setting metrics-server=true in profile "embed-certs-802149"
	I0722 11:56:19.373998   59477 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:56:19.374004   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.374013   59477 addons.go:234] Setting addon metrics-server=true in "embed-certs-802149"
	W0722 11:56:19.374021   59477 addons.go:243] addon metrics-server should already be in state true
	I0722 11:56:19.374044   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.374353   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374376   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374383   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.374390   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374401   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.374418   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.375347   59477 out.go:177] * Verifying Kubernetes components...
	I0722 11:56:19.376850   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:56:19.393500   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0722 11:56:19.394178   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.394524   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36485
	I0722 11:56:19.394704   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0722 11:56:19.394894   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.395064   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395087   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395137   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.395433   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395451   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395471   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.395586   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395607   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395653   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.395754   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.395956   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.396317   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.396345   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.396481   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.396512   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.399476   59477 addons.go:234] Setting addon default-storageclass=true in "embed-certs-802149"
	W0722 11:56:19.399502   59477 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:56:19.399530   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.399879   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.399908   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.411862   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44855
	I0722 11:56:19.412247   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.412708   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.412733   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.413106   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.413324   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.414100   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0722 11:56:19.414530   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.415017   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.415041   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.415100   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.415300   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0722 11:56:19.415340   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.415574   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.415662   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.416068   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.416095   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.416416   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.416861   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.416905   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.417086   59477 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:56:19.417365   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.418373   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:56:19.418392   59477 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:56:19.418411   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.419202   59477 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:56:19.420581   59477 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:19.420595   59477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:56:19.420608   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.421600   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.422201   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.422224   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.422367   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.422535   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.422697   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.422820   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.423577   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.424183   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.424211   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.424347   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.424543   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.424694   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.424812   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.432998   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0722 11:56:19.433395   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.433820   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.433837   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.434137   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.434300   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.435804   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.436013   59477 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:19.436029   59477 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:56:19.436043   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.439161   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.439507   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.439527   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.439666   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.439832   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.439968   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.440111   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.579586   59477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:56:19.613199   59477 node_ready.go:35] waiting up to 6m0s for node "embed-certs-802149" to be "Ready" ...
	I0722 11:56:19.621008   59477 node_ready.go:49] node "embed-certs-802149" has status "Ready":"True"
	I0722 11:56:19.621026   59477 node_ready.go:38] duration metric: took 7.803634ms for node "embed-certs-802149" to be "Ready" ...
	I0722 11:56:19.621035   59477 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:19.626247   59477 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:17.676844   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:19.677982   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:19.721316   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:19.751087   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:19.752762   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:56:19.752782   59477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:56:19.855879   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:56:19.855913   59477 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:56:19.929321   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:19.929353   59477 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:56:19.985335   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:20.449104   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449132   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449106   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449220   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449514   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449514   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.449531   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449540   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.449553   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449566   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449879   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449880   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.449902   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.450851   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.450865   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.450872   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.450877   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.451078   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.451104   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.451119   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.470273   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.470292   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.470576   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.470623   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.470597   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.627931   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.627953   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.628276   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.628294   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.628293   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.628308   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.628317   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.628560   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.628605   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.628619   59477 addons.go:475] Verifying addon metrics-server=true in "embed-certs-802149"
	I0722 11:56:20.628625   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.630168   59477 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:56:20.631410   59477 addons.go:510] duration metric: took 1.257573445s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 11:56:21.631628   59477 pod_ready.go:102] pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:22.159823   59477 pod_ready.go:92] pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.159847   59477 pod_ready.go:81] duration metric: took 2.533579062s for pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.159856   59477 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.180462   59477 pod_ready.go:92] pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.180487   59477 pod_ready.go:81] duration metric: took 20.623565ms for pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.180499   59477 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.194180   59477 pod_ready.go:92] pod "etcd-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.194207   59477 pod_ready.go:81] duration metric: took 13.700217ms for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.194219   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.199321   59477 pod_ready.go:92] pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.199343   59477 pod_ready.go:81] duration metric: took 5.116564ms for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.199355   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.203845   59477 pod_ready.go:92] pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.203865   59477 pod_ready.go:81] duration metric: took 4.502825ms for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.203875   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w89tg" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.529773   59477 pod_ready.go:92] pod "kube-proxy-w89tg" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.529797   59477 pod_ready.go:81] duration metric: took 325.914184ms for pod "kube-proxy-w89tg" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.529809   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.930597   59477 pod_ready.go:92] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.930620   59477 pod_ready.go:81] duration metric: took 400.802915ms for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.930631   59477 pod_ready.go:38] duration metric: took 3.309586025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:22.930649   59477 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:56:22.930707   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:56:22.946660   59477 api_server.go:72] duration metric: took 3.57286966s to wait for apiserver process to appear ...
	I0722 11:56:22.946684   59477 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:56:22.946703   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:56:22.950940   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I0722 11:56:22.951817   59477 api_server.go:141] control plane version: v1.30.3
	I0722 11:56:22.951840   59477 api_server.go:131] duration metric: took 5.148295ms to wait for apiserver health ...
	I0722 11:56:22.951848   59477 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:56:23.134122   59477 system_pods.go:59] 9 kube-system pods found
	I0722 11:56:23.134153   59477 system_pods.go:61] "coredns-7db6d8ff4d-c2dkr" [c82a689e-5a99-4889-808f-3e1e199323d8] Running
	I0722 11:56:23.134159   59477 system_pods.go:61] "coredns-7db6d8ff4d-kz8d9" [26d2d65c-aa13-4d94-b091-bf674fee0185] Running
	I0722 11:56:23.134163   59477 system_pods.go:61] "etcd-embed-certs-802149" [a30aead7-f9cf-487f-9d2e-dac877edf07a] Running
	I0722 11:56:23.134166   59477 system_pods.go:61] "kube-apiserver-embed-certs-802149" [b7f50315-5043-4abd-a40e-c2d285b66faa] Running
	I0722 11:56:23.134169   59477 system_pods.go:61] "kube-controller-manager-embed-certs-802149" [e96d4b9d-0bf0-40b4-b483-ba558587fe91] Running
	I0722 11:56:23.134172   59477 system_pods.go:61] "kube-proxy-w89tg" [da4d3074-e552-4c7b-ba0f-f57a3b80f529] Running
	I0722 11:56:23.134175   59477 system_pods.go:61] "kube-scheduler-embed-certs-802149" [5f1d8d32-7564-4d2a-ba97-283754771b15] Running
	I0722 11:56:23.134181   59477 system_pods.go:61] "metrics-server-569cc877fc-88d4n" [b705d674-b431-4946-aa67-871d7d2f9e08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:56:23.134186   59477 system_pods.go:61] "storage-provisioner" [a68fcb5f-42b5-408e-9c10-d86b14a1b993] Running
	I0722 11:56:23.134195   59477 system_pods.go:74] duration metric: took 182.340929ms to wait for pod list to return data ...
	I0722 11:56:23.134204   59477 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:56:23.330549   59477 default_sa.go:45] found service account: "default"
	I0722 11:56:23.330573   59477 default_sa.go:55] duration metric: took 196.363183ms for default service account to be created ...
	I0722 11:56:23.330582   59477 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:56:23.532750   59477 system_pods.go:86] 9 kube-system pods found
	I0722 11:56:23.532774   59477 system_pods.go:89] "coredns-7db6d8ff4d-c2dkr" [c82a689e-5a99-4889-808f-3e1e199323d8] Running
	I0722 11:56:23.532779   59477 system_pods.go:89] "coredns-7db6d8ff4d-kz8d9" [26d2d65c-aa13-4d94-b091-bf674fee0185] Running
	I0722 11:56:23.532784   59477 system_pods.go:89] "etcd-embed-certs-802149" [a30aead7-f9cf-487f-9d2e-dac877edf07a] Running
	I0722 11:56:23.532788   59477 system_pods.go:89] "kube-apiserver-embed-certs-802149" [b7f50315-5043-4abd-a40e-c2d285b66faa] Running
	I0722 11:56:23.532795   59477 system_pods.go:89] "kube-controller-manager-embed-certs-802149" [e96d4b9d-0bf0-40b4-b483-ba558587fe91] Running
	I0722 11:56:23.532799   59477 system_pods.go:89] "kube-proxy-w89tg" [da4d3074-e552-4c7b-ba0f-f57a3b80f529] Running
	I0722 11:56:23.532802   59477 system_pods.go:89] "kube-scheduler-embed-certs-802149" [5f1d8d32-7564-4d2a-ba97-283754771b15] Running
	I0722 11:56:23.532809   59477 system_pods.go:89] "metrics-server-569cc877fc-88d4n" [b705d674-b431-4946-aa67-871d7d2f9e08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:56:23.532813   59477 system_pods.go:89] "storage-provisioner" [a68fcb5f-42b5-408e-9c10-d86b14a1b993] Running
	I0722 11:56:23.532821   59477 system_pods.go:126] duration metric: took 202.234836ms to wait for k8s-apps to be running ...
	I0722 11:56:23.532832   59477 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:56:23.532876   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:56:23.547953   59477 system_svc.go:56] duration metric: took 15.113032ms WaitForService to wait for kubelet
	I0722 11:56:23.547983   59477 kubeadm.go:582] duration metric: took 4.174196727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:56:23.548007   59477 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:56:23.730474   59477 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:56:23.730495   59477 node_conditions.go:123] node cpu capacity is 2
	I0722 11:56:23.730505   59477 node_conditions.go:105] duration metric: took 182.492899ms to run NodePressure ...
	I0722 11:56:23.730516   59477 start.go:241] waiting for startup goroutines ...
	I0722 11:56:23.730522   59477 start.go:246] waiting for cluster config update ...
	I0722 11:56:23.730532   59477 start.go:255] writing updated cluster config ...
	I0722 11:56:23.730772   59477 ssh_runner.go:195] Run: rm -f paused
	I0722 11:56:23.780571   59477 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 11:56:23.782536   59477 out.go:177] * Done! kubectl is now configured to use "embed-certs-802149" cluster and "default" namespace by default
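At this point the embed-certs-802149 profile is up with the storage-provisioner, default-storageclass and metrics-server addons enabled. An illustrative follow-up (not part of the test run) to confirm that state by hand would be:

    # Not from the log: a quick manual check of the cluster the test just brought up.
    kubectl --context embed-certs-802149 get nodes -o wide
    kubectl --context embed-certs-802149 -n kube-system get pods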
	I0722 11:56:22.178416   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:24.676529   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:26.677122   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:29.177390   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:31.677291   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:33.677523   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:35.170828   58921 pod_ready.go:81] duration metric: took 4m0.000275806s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" ...
	E0722 11:56:35.170855   58921 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:56:35.170871   58921 pod_ready.go:38] duration metric: took 4m13.545311637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:35.170901   58921 kubeadm.go:597] duration metric: took 4m20.764141089s to restartPrimaryControlPlane
	W0722 11:56:35.170949   58921 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:56:35.170973   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:56:36.176806   60225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.325500952s)
	I0722 11:56:36.176871   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:56:36.193398   60225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:56:36.203561   60225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:56:36.213561   60225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:56:36.213584   60225 kubeadm.go:157] found existing configuration files:
	
	I0722 11:56:36.213654   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 11:56:36.223204   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:56:36.223265   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:56:36.232550   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 11:56:36.241899   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:56:36.241961   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:56:36.252184   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 11:56:36.262462   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:56:36.262518   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:56:36.272942   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 11:56:36.282776   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:56:36.282831   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
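The four grep/rm pairs above are the stale kubeconfig cleanup: each per-component kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8444 and is removed otherwise, before kubeadm init is re-run below. A compact bash rendering of the same check, as an illustrative sketch:

    # Same logic as the grep/rm sequence above, written as a loop (illustrative only).
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done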
	I0722 11:56:36.292375   60225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:56:36.490647   60225 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:56:44.713923   60225 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 11:56:44.713975   60225 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:56:44.714046   60225 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:56:44.714145   60225 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:56:44.714255   60225 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:56:44.714330   60225 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:56:44.715906   60225 out.go:204]   - Generating certificates and keys ...
	I0722 11:56:44.716026   60225 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:56:44.716122   60225 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:56:44.716247   60225 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:56:44.716346   60225 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:56:44.716449   60225 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:56:44.716530   60225 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:56:44.716617   60225 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:56:44.716704   60225 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:56:44.716820   60225 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:56:44.716939   60225 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:56:44.717000   60225 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:56:44.717078   60225 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:56:44.717159   60225 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:56:44.717238   60225 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:56:44.717312   60225 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:56:44.717397   60225 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:56:44.717471   60225 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:56:44.717594   60225 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:56:44.717684   60225 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:56:44.719097   60225 out.go:204]   - Booting up control plane ...
	I0722 11:56:44.719201   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:56:44.719288   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:56:44.719387   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:56:44.719548   60225 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:56:44.719662   60225 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:56:44.719698   60225 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:56:44.719819   60225 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:56:44.719909   60225 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:56:44.719969   60225 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001605769s
	I0722 11:56:44.720047   60225 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:56:44.720114   60225 kubeadm.go:310] [api-check] The API server is healthy after 4.501377908s
	I0722 11:56:44.720253   60225 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:56:44.720428   60225 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:56:44.720522   60225 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:56:44.720781   60225 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-605740 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:56:44.720842   60225 kubeadm.go:310] [bootstrap-token] Using token: 51n0hg.x5nghdd43rf7nm3m
	I0722 11:56:44.722095   60225 out.go:204]   - Configuring RBAC rules ...
	I0722 11:56:44.722193   60225 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:56:44.722266   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:56:44.722405   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:56:44.722575   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:56:44.722695   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:56:44.722769   60225 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:56:44.722875   60225 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:56:44.722916   60225 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:56:44.722957   60225 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:56:44.722966   60225 kubeadm.go:310] 
	I0722 11:56:44.723046   60225 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:56:44.723055   60225 kubeadm.go:310] 
	I0722 11:56:44.723117   60225 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:56:44.723123   60225 kubeadm.go:310] 
	I0722 11:56:44.723147   60225 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:56:44.723201   60225 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:56:44.723244   60225 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:56:44.723250   60225 kubeadm.go:310] 
	I0722 11:56:44.723313   60225 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:56:44.723324   60225 kubeadm.go:310] 
	I0722 11:56:44.723374   60225 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:56:44.723387   60225 kubeadm.go:310] 
	I0722 11:56:44.723462   60225 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:56:44.723568   60225 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:56:44.723624   60225 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:56:44.723630   60225 kubeadm.go:310] 
	I0722 11:56:44.723703   60225 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:56:44.723762   60225 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:56:44.723768   60225 kubeadm.go:310] 
	I0722 11:56:44.723832   60225 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 51n0hg.x5nghdd43rf7nm3m \
	I0722 11:56:44.723935   60225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:56:44.723960   60225 kubeadm.go:310] 	--control-plane 
	I0722 11:56:44.723966   60225 kubeadm.go:310] 
	I0722 11:56:44.724035   60225 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:56:44.724041   60225 kubeadm.go:310] 
	I0722 11:56:44.724109   60225 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 51n0hg.x5nghdd43rf7nm3m \
	I0722 11:56:44.724210   60225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:56:44.724222   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:56:44.724231   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:56:44.725651   60225 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:56:44.726843   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:56:44.737856   60225 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:56:44.756687   60225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:56:44.756772   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:44.756790   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-605740 minikube.k8s.io/updated_at=2024_07_22T11_56_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=default-k8s-diff-port-605740 minikube.k8s.io/primary=true
	I0722 11:56:44.782416   60225 ops.go:34] apiserver oom_adj: -16
	I0722 11:56:44.957801   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:45.458616   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:45.958542   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:46.458436   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:46.957908   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:47.458058   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:47.958520   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:48.457970   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:48.958357   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:49.457964   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:49.958236   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:50.458547   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:50.958594   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:51.457865   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:51.958297   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:52.458486   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:52.957877   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:53.458199   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:53.958331   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:54.458178   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:54.958725   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:55.458619   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:55.958861   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:56.458294   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:56.958145   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:57.458414   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:57.566568   60225 kubeadm.go:1113] duration metric: took 12.809852518s to wait for elevateKubeSystemPrivileges
	I0722 11:56:57.566604   60225 kubeadm.go:394] duration metric: took 5m14.748062926s to StartCluster
	I0722 11:56:57.566626   60225 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:57.566709   60225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:56:57.568307   60225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:57.568592   60225 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:56:57.568648   60225 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:56:57.568731   60225 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568765   60225 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.568778   60225 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:56:57.568777   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:56:57.568765   60225 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568775   60225 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568811   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.568813   60225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-605740"
	I0722 11:56:57.568819   60225 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.568828   60225 addons.go:243] addon metrics-server should already be in state true
	I0722 11:56:57.568849   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.569145   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569170   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569187   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.569191   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.569216   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569265   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.570171   60225 out.go:177] * Verifying Kubernetes components...
	I0722 11:56:57.571536   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:56:57.585174   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44223
	I0722 11:56:57.585655   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.586149   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.586174   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.586532   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.587082   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.587135   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.588871   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43863
	I0722 11:56:57.588968   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0722 11:56:57.589289   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.589398   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.589785   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.589809   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.589875   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.589898   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.590183   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.590223   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.590393   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.590860   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.590906   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.594024   60225 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.594046   60225 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:56:57.594074   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.594755   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.594794   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.604913   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0722 11:56:57.605449   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.606000   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.606017   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.606459   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37393
	I0722 11:56:57.606768   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.606871   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.607129   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.607259   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.607273   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.607591   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.607779   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.609472   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.609513   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46833
	I0722 11:56:57.609611   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.609857   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.610299   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.610314   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.610552   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.611030   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.611066   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.611075   60225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:56:57.611086   60225 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:56:57.612333   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:56:57.612352   60225 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:56:57.612373   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.612449   60225 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:57.612463   60225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:56:57.612480   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.615359   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.615950   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.615979   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.616137   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.616288   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.616341   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.616503   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.616636   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.616806   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.616830   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.617016   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.617204   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.617433   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.617587   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.627323   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0722 11:56:57.627674   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.628110   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.628129   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.628426   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.628581   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.630063   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.630250   60225 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:57.630264   60225 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:56:57.630276   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.633223   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.633589   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.633652   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.633864   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.634041   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.634208   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.634349   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.800318   60225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:56:57.838800   60225 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-605740" to be "Ready" ...
	I0722 11:56:57.858375   60225 node_ready.go:49] node "default-k8s-diff-port-605740" has status "Ready":"True"
	I0722 11:56:57.858401   60225 node_ready.go:38] duration metric: took 19.564389ms for node "default-k8s-diff-port-605740" to be "Ready" ...
	I0722 11:56:57.858412   60225 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:57.864271   60225 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.891296   60225 pod_ready.go:92] pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.891327   60225 pod_ready.go:81] duration metric: took 27.02499ms for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.891341   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.904548   60225 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.904572   60225 pod_ready.go:81] duration metric: took 13.223985ms for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.904582   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.922071   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:56:57.922090   60225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:56:57.936115   60225 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.936135   60225 pod_ready.go:81] duration metric: took 31.547556ms for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.936144   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-58qcp" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.956826   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:57.959831   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:57.970183   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:56:57.970209   60225 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:56:58.023756   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:58.023783   60225 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:56:58.132167   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:58.836074   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836101   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836129   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836151   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836444   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.836480   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836489   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.836496   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836507   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836603   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.836635   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836645   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.836653   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836660   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836797   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836809   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.838425   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.838441   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.838457   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.855236   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.855255   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.855533   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.855551   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.855558   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:59.133028   60225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.000816157s)
	I0722 11:56:59.133092   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:59.133108   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:59.133395   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:59.133412   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:59.133420   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:59.133428   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:59.133715   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:59.133744   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:59.133766   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:59.133788   60225 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-605740"
	I0722 11:56:59.135326   60225 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:56:59.136408   60225 addons.go:510] duration metric: took 1.567760763s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 11:56:59.942782   60225 pod_ready.go:102] pod "kube-proxy-58qcp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:00.442434   60225 pod_ready.go:92] pod "kube-proxy-58qcp" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:00.442455   60225 pod_ready.go:81] duration metric: took 2.50630376s for pod "kube-proxy-58qcp" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.442463   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.446225   60225 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:00.446246   60225 pod_ready.go:81] duration metric: took 3.778284ms for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.446254   60225 pod_ready.go:38] duration metric: took 2.58782997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:57:00.446267   60225 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:57:00.446310   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:57:00.461412   60225 api_server.go:72] duration metric: took 2.892790415s to wait for apiserver process to appear ...
	I0722 11:57:00.461431   60225 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:57:00.461448   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:57:00.465904   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 200:
	ok
	I0722 11:57:00.466558   60225 api_server.go:141] control plane version: v1.30.3
	I0722 11:57:00.466577   60225 api_server.go:131] duration metric: took 5.13931ms to wait for apiserver health ...
	I0722 11:57:00.466585   60225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:57:00.471230   60225 system_pods.go:59] 9 kube-system pods found
	I0722 11:57:00.471254   60225 system_pods.go:61] "coredns-7db6d8ff4d-nlfgl" [c02dda63-e71b-429f-b9d5-0b2ca40e8dcc] Running
	I0722 11:57:00.471260   60225 system_pods.go:61] "coredns-7db6d8ff4d-tnnxf" [337c6df7-035c-488d-a123-a410d76d836b] Running
	I0722 11:57:00.471265   60225 system_pods.go:61] "etcd-default-k8s-diff-port-605740" [d1cda641-22de-4bda-ac2b-ed0a92bbde9f] Running
	I0722 11:57:00.471270   60225 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-605740" [b3c4fc30-392e-40db-8be3-fea337a40ca5] Running
	I0722 11:57:00.471274   60225 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-605740" [7cddd0d3-487d-47f5-9c6d-873c5731170e] Running
	I0722 11:57:00.471279   60225 system_pods.go:61] "kube-proxy-58qcp" [25c02c70-a840-410c-9d48-3d15a3927a77] Running
	I0722 11:57:00.471283   60225 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-605740" [b7678aec-927b-40c2-a91e-4c0444c59a90] Running
	I0722 11:57:00.471293   60225 system_pods.go:61] "metrics-server-569cc877fc-2xv7x" [7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:00.471299   60225 system_pods.go:61] "storage-provisioner" [c4ff4a3e-008c-4c4e-9eb3-281c46b10279] Running
	I0722 11:57:00.471309   60225 system_pods.go:74] duration metric: took 4.717009ms to wait for pod list to return data ...
	I0722 11:57:00.471320   60225 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:57:00.642325   60225 default_sa.go:45] found service account: "default"
	I0722 11:57:00.642356   60225 default_sa.go:55] duration metric: took 171.03007ms for default service account to be created ...
	I0722 11:57:00.642365   60225 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:57:00.846043   60225 system_pods.go:86] 9 kube-system pods found
	I0722 11:57:00.846071   60225 system_pods.go:89] "coredns-7db6d8ff4d-nlfgl" [c02dda63-e71b-429f-b9d5-0b2ca40e8dcc] Running
	I0722 11:57:00.846079   60225 system_pods.go:89] "coredns-7db6d8ff4d-tnnxf" [337c6df7-035c-488d-a123-a410d76d836b] Running
	I0722 11:57:00.846083   60225 system_pods.go:89] "etcd-default-k8s-diff-port-605740" [d1cda641-22de-4bda-ac2b-ed0a92bbde9f] Running
	I0722 11:57:00.846087   60225 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-605740" [b3c4fc30-392e-40db-8be3-fea337a40ca5] Running
	I0722 11:57:00.846092   60225 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-605740" [7cddd0d3-487d-47f5-9c6d-873c5731170e] Running
	I0722 11:57:00.846096   60225 system_pods.go:89] "kube-proxy-58qcp" [25c02c70-a840-410c-9d48-3d15a3927a77] Running
	I0722 11:57:00.846100   60225 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-605740" [b7678aec-927b-40c2-a91e-4c0444c59a90] Running
	I0722 11:57:00.846106   60225 system_pods.go:89] "metrics-server-569cc877fc-2xv7x" [7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:00.846110   60225 system_pods.go:89] "storage-provisioner" [c4ff4a3e-008c-4c4e-9eb3-281c46b10279] Running
	I0722 11:57:00.846118   60225 system_pods.go:126] duration metric: took 203.748606ms to wait for k8s-apps to be running ...
	I0722 11:57:00.846124   60225 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:57:00.846168   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:00.867261   60225 system_svc.go:56] duration metric: took 21.130025ms WaitForService to wait for kubelet
	I0722 11:57:00.867290   60225 kubeadm.go:582] duration metric: took 3.298668854s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:57:00.867314   60225 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:57:01.042201   60225 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:57:01.042226   60225 node_conditions.go:123] node cpu capacity is 2
	I0722 11:57:01.042237   60225 node_conditions.go:105] duration metric: took 174.91764ms to run NodePressure ...
	I0722 11:57:01.042249   60225 start.go:241] waiting for startup goroutines ...
	I0722 11:57:01.042256   60225 start.go:246] waiting for cluster config update ...
	I0722 11:57:01.042268   60225 start.go:255] writing updated cluster config ...
	I0722 11:57:01.042526   60225 ssh_runner.go:195] Run: rm -f paused
	I0722 11:57:01.090643   60225 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 11:57:01.092526   60225 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-605740" cluster and "default" namespace by default
	I0722 11:57:01.339755   58921 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.168752701s)
	I0722 11:57:01.339827   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:01.368833   58921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:57:01.392011   58921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:57:01.403725   58921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:57:01.403746   58921 kubeadm.go:157] found existing configuration files:
	
	I0722 11:57:01.403795   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:57:01.421922   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:57:01.422011   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:57:01.434303   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:57:01.445095   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:57:01.445154   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:57:01.464906   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:57:01.475002   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:57:01.475074   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:57:01.484493   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:57:01.493467   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:57:01.493523   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:57:01.502496   58921 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:57:01.550079   58921 kubeadm.go:310] W0722 11:57:01.524041    2933 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 11:57:01.551819   58921 kubeadm.go:310] W0722 11:57:01.525728    2933 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 11:57:01.670102   58921 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:57:10.497048   58921 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0722 11:57:10.497168   58921 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:57:10.497273   58921 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:57:10.497381   58921 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:57:10.497498   58921 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 11:57:10.497555   58921 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:57:10.498805   58921 out.go:204]   - Generating certificates and keys ...
	I0722 11:57:10.498905   58921 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:57:10.498982   58921 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:57:10.499087   58921 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:57:10.499182   58921 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:57:10.499265   58921 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:57:10.499326   58921 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:57:10.499385   58921 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:57:10.499500   58921 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:57:10.499633   58921 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:57:10.499724   58921 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:57:10.499784   58921 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:57:10.499840   58921 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:57:10.499892   58921 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:57:10.499982   58921 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:57:10.500064   58921 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:57:10.500155   58921 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:57:10.500241   58921 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:57:10.500343   58921 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:57:10.500442   58921 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:57:10.501847   58921 out.go:204]   - Booting up control plane ...
	I0722 11:57:10.501931   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:57:10.501995   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:57:10.502068   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:57:10.502203   58921 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:57:10.502318   58921 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:57:10.502367   58921 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:57:10.502477   58921 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:57:10.502541   58921 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:57:10.502599   58921 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501448538s
	I0722 11:57:10.502660   58921 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:57:10.502712   58921 kubeadm.go:310] [api-check] The API server is healthy after 5.001578291s
	I0722 11:57:10.502801   58921 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:57:10.502914   58921 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:57:10.502962   58921 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:57:10.503159   58921 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-339929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:57:10.503211   58921 kubeadm.go:310] [bootstrap-token] Using token: ivof4z.0tnj9kdw05524oxn
	I0722 11:57:10.504409   58921 out.go:204]   - Configuring RBAC rules ...
	I0722 11:57:10.504501   58921 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:57:10.504616   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:57:10.504780   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:57:10.504970   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:57:10.505144   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:57:10.505257   58921 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:57:10.505410   58921 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:57:10.505471   58921 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:57:10.505538   58921 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:57:10.505546   58921 kubeadm.go:310] 
	I0722 11:57:10.505631   58921 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:57:10.505649   58921 kubeadm.go:310] 
	I0722 11:57:10.505755   58921 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:57:10.505764   58921 kubeadm.go:310] 
	I0722 11:57:10.505804   58921 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:57:10.505897   58921 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:57:10.505972   58921 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:57:10.505982   58921 kubeadm.go:310] 
	I0722 11:57:10.506059   58921 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:57:10.506067   58921 kubeadm.go:310] 
	I0722 11:57:10.506128   58921 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:57:10.506136   58921 kubeadm.go:310] 
	I0722 11:57:10.506205   58921 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:57:10.506306   58921 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:57:10.506414   58921 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:57:10.506423   58921 kubeadm.go:310] 
	I0722 11:57:10.506520   58921 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:57:10.506617   58921 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:57:10.506626   58921 kubeadm.go:310] 
	I0722 11:57:10.506742   58921 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ivof4z.0tnj9kdw05524oxn \
	I0722 11:57:10.506885   58921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:57:10.506922   58921 kubeadm.go:310] 	--control-plane 
	I0722 11:57:10.506931   58921 kubeadm.go:310] 
	I0722 11:57:10.507044   58921 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:57:10.507057   58921 kubeadm.go:310] 
	I0722 11:57:10.507156   58921 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ivof4z.0tnj9kdw05524oxn \
	I0722 11:57:10.507309   58921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:57:10.507321   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:57:10.507330   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:57:10.508685   58921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:57:10.509747   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:57:10.520250   58921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:57:10.540094   58921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:57:10.540196   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:10.540212   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-339929 minikube.k8s.io/updated_at=2024_07_22T11_57_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=no-preload-339929 minikube.k8s.io/primary=true
	I0722 11:57:10.763453   58921 ops.go:34] apiserver oom_adj: -16
	I0722 11:57:10.763505   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:11.264268   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:11.764311   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:12.264344   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:12.764563   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:13.264149   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:13.764260   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:14.263595   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:14.763794   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:15.263787   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:15.343777   58921 kubeadm.go:1113] duration metric: took 4.803631766s to wait for elevateKubeSystemPrivileges
	I0722 11:57:15.343817   58921 kubeadm.go:394] duration metric: took 5m0.988139889s to StartCluster
	I0722 11:57:15.343840   58921 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:57:15.343940   58921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:57:15.345913   58921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:57:15.346216   58921 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:57:15.346387   58921 config.go:182] Loaded profile config "no-preload-339929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 11:57:15.346343   58921 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:57:15.346441   58921 addons.go:69] Setting storage-provisioner=true in profile "no-preload-339929"
	I0722 11:57:15.346454   58921 addons.go:69] Setting metrics-server=true in profile "no-preload-339929"
	I0722 11:57:15.346483   58921 addons.go:234] Setting addon metrics-server=true in "no-preload-339929"
	W0722 11:57:15.346491   58921 addons.go:243] addon metrics-server should already be in state true
	I0722 11:57:15.346485   58921 addons.go:234] Setting addon storage-provisioner=true in "no-preload-339929"
	W0722 11:57:15.346502   58921 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:57:15.346515   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.346529   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.346445   58921 addons.go:69] Setting default-storageclass=true in profile "no-preload-339929"
	I0722 11:57:15.346600   58921 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-339929"
	I0722 11:57:15.346890   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.346920   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.346890   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.346994   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.347007   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.347025   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.347928   58921 out.go:177] * Verifying Kubernetes components...
	I0722 11:57:15.352932   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:57:15.362633   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0722 11:57:15.362665   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I0722 11:57:15.362630   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34691
	I0722 11:57:15.363041   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363053   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363133   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363521   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363537   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363544   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363558   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363568   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363587   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363905   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.363945   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.364078   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.364104   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.364485   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.364517   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.364602   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.364629   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.367146   58921 addons.go:234] Setting addon default-storageclass=true in "no-preload-339929"
	W0722 11:57:15.367170   58921 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:57:15.367197   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.367419   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.367436   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.380125   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46561
	I0722 11:57:15.380393   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
	I0722 11:57:15.380557   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.380972   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.381545   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.381546   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.381570   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.381585   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.381956   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.381987   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.382133   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.382152   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.383766   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.383925   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.384000   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0722 11:57:15.384347   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.384833   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.384856   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.385195   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.385635   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.385664   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.386055   58921 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:57:15.386060   58921 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:57:15.387105   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:57:15.387119   58921 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:57:15.387138   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.387186   58921 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:57:15.387197   58921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:57:15.387215   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.390591   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.390928   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.390975   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.390996   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.391233   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.391366   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.391387   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.391423   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.391599   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.391632   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.391802   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.391841   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.391986   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.392111   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.401709   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34965
	I0722 11:57:15.402082   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.402543   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.402563   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.402854   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.403074   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.404406   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.404603   58921 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:57:15.404617   58921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:57:15.404633   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.407332   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.407829   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.407853   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.408041   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.408218   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.408356   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.408491   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.550538   58921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:57:15.568066   58921 node_ready.go:35] waiting up to 6m0s for node "no-preload-339929" to be "Ready" ...
	I0722 11:57:15.577034   58921 node_ready.go:49] node "no-preload-339929" has status "Ready":"True"
	I0722 11:57:15.577054   58921 node_ready.go:38] duration metric: took 8.96328ms for node "no-preload-339929" to be "Ready" ...
	I0722 11:57:15.577062   58921 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:57:15.587213   58921 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:15.629092   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:57:15.714856   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:57:15.714885   58921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:57:15.746923   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:57:15.781300   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:57:15.781327   58921 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:57:15.842787   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:57:15.842816   58921 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:57:15.884901   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:57:16.165926   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.165955   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166184   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166200   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166255   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166296   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166315   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166329   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166340   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166454   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166497   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166520   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166542   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166581   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166595   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166551   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166519   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166954   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166969   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.199171   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.199196   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.199533   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.199558   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.199573   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.678992   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.679015   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.679366   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.679389   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.679400   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.679400   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.679408   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.679658   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.679699   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.679708   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.679719   58921 addons.go:475] Verifying addon metrics-server=true in "no-preload-339929"
	I0722 11:57:16.681483   58921 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:57:16.682888   58921 addons.go:510] duration metric: took 1.336544744s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
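The lines above show the addon-enable phase: minikube copies the storage-provisioner, storageclass, and metrics-server manifests into /etc/kubernetes/addons on the node and applies them with the cluster's bundled kubectl. A minimal, hypothetical Go sketch of that apply step follows; the kubectl path, kubeconfig path, and manifest names are taken from the log, but running them locally via os/exec (rather than over SSH, as minikube does) is an illustrative simplification.

```go
// Illustrative only: approximates the "kubectl apply -f <addon manifests>" step
// seen in the log above. Paths mirror the log; this is not minikube's own code.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl" // path from the log
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"--kubeconfig=/var/lib/minikube/kubeconfig", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command(kubectl, args...).CombinedOutput()
	if err != nil {
		fmt.Printf("apply failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("addons applied:\n%s\n", out)
}
```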
	I0722 11:57:17.596659   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:20.093596   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:24.750495   59674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:57:24.750641   59674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 11:57:24.752309   59674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:57:24.752368   59674 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:57:24.752499   59674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:57:24.752662   59674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:57:24.752788   59674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:57:24.752851   59674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:57:24.754464   59674 out.go:204]   - Generating certificates and keys ...
	I0722 11:57:24.754528   59674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:57:24.754595   59674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:57:24.754712   59674 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:57:24.754926   59674 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:57:24.755033   59674 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:57:24.755114   59674 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:57:24.755188   59674 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:57:24.755276   59674 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:57:24.755374   59674 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:57:24.755472   59674 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:57:24.755513   59674 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:57:24.755561   59674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:57:24.755606   59674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:57:24.755647   59674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:57:24.755700   59674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:57:24.755742   59674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:57:24.755836   59674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:57:24.755950   59674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:57:24.755986   59674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:57:24.756089   59674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:57:24.757395   59674 out.go:204]   - Booting up control plane ...
	I0722 11:57:24.757482   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:57:24.757566   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:57:24.757657   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:57:24.757905   59674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:57:24.758131   59674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:57:24.758205   59674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:57:24.758311   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.758565   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.758650   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.758852   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.758957   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759153   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759217   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759412   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759495   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759688   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759696   59674 kubeadm.go:310] 
	I0722 11:57:24.759729   59674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:57:24.759791   59674 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:57:24.759812   59674 kubeadm.go:310] 
	I0722 11:57:24.759868   59674 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:57:24.759903   59674 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:57:24.760077   59674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:57:24.760094   59674 kubeadm.go:310] 
	I0722 11:57:24.760245   59674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:57:24.760300   59674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:57:24.760350   59674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:57:24.760363   59674 kubeadm.go:310] 
	I0722 11:57:24.760534   59674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:57:24.760640   59674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:57:24.760654   59674 kubeadm.go:310] 
	I0722 11:57:24.760819   59674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:57:24.760902   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:57:24.761013   59674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:57:24.761124   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:57:24.761213   59674 kubeadm.go:310] 
	W0722 11:57:24.761263   59674 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
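The repeated [kubelet-check] failures above mean kubeadm's health probe of the kubelet at http://localhost:10248/healthz kept getting "connection refused", i.e. the kubelet never came up on the old-k8s-version node. A minimal sketch of that same probe is below; the endpoint and the 40s initial timeout come from the log, while the retry interval is an assumption for illustration.

```go
// Minimal sketch of the probe kubeadm's [kubelet-check] performs:
// GET http://localhost:10248/healthz until it answers or a deadline passes.
// Endpoint and 40s timeout are from the log; the 5s retry interval is illustrative.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	deadline := time.Now().Add(40 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err != nil {
			fmt.Printf("kubelet not healthy yet: %v\n", err) // e.g. connection refused
			time.Sleep(5 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
		return
	}
	fmt.Println("timed out waiting for the kubelet healthz endpoint")
}
```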
	
	I0722 11:57:24.761321   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:57:25.222130   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:25.236593   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:57:25.247009   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:57:25.247026   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:57:25.247078   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:57:25.256617   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:57:25.256674   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:57:25.265950   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:57:25.275080   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:57:25.275133   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:57:25.285058   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:57:25.294015   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:57:25.294070   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:57:25.304009   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:57:25.313492   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:57:25.313565   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
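Before retrying kubeadm init, minikube checks each /etc/kubernetes/*.conf for the expected control-plane endpoint and removes files that do not contain it; here the grep fails because the files do not exist, so the rm is a no-op. A hedged Go sketch of that stale-config check is below; file names and the endpoint string come from the log, but this version runs locally rather than over SSH as minikube does.

```go
// Illustrative sketch of the stale-config check above: if a kubeconfig does not
// reference https://control-plane.minikube.internal:8443, remove it.
// File names are from the log; this runs locally, not over an SSH runner.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s missing or stale, removing\n", f)
			_ = os.Remove(f) // mirrors the "sudo rm -f" in the log
			continue
		}
		fmt.Printf("%s references %s, keeping\n", f, endpoint)
	}
}
```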
	I0722 11:57:25.322903   59674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:57:22.593478   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.593498   58921 pod_ready.go:81] duration metric: took 7.006267885s for pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.593505   58921 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.598122   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.598149   58921 pod_ready.go:81] duration metric: took 4.631196ms for pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.598159   58921 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.602448   58921 pod_ready.go:92] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.602466   58921 pod_ready.go:81] duration metric: took 4.300795ms for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.602474   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.607921   58921 pod_ready.go:92] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.607940   58921 pod_ready.go:81] duration metric: took 5.46066ms for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.607951   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.114900   58921 pod_ready.go:92] pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.114929   58921 pod_ready.go:81] duration metric: took 1.506968399s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.114942   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b5xwg" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.190875   58921 pod_ready.go:92] pod "kube-proxy-b5xwg" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.190895   58921 pod_ready.go:81] duration metric: took 75.947595ms for pod "kube-proxy-b5xwg" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.190905   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.590994   58921 pod_ready.go:92] pod "kube-scheduler-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.591020   58921 pod_ready.go:81] duration metric: took 400.108088ms for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.591029   58921 pod_ready.go:38] duration metric: took 9.013958119s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
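The pod_ready waits above poll each system-critical pod until its Ready condition reports "True". A minimal client-go sketch of one such check follows; the kubeconfig path and pod name are placeholders taken from the log, and a reasonably recent client-go (context-aware Get) is assumed — this is not minikube's pod_ready implementation.

```go
// Hypothetical sketch of a single "is this pod Ready?" check, like the
// pod_ready waits above, using client-go. Names and paths are placeholders.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(
		context.TODO(), "coredns-5cfdc65f69-vg4wp", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			fmt.Printf("pod %s Ready=%s\n", pod.Name, cond.Status)
		}
	}
}
```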
	I0722 11:57:24.591051   58921 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:57:24.591110   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:57:24.609675   58921 api_server.go:72] duration metric: took 9.263421304s to wait for apiserver process to appear ...
	I0722 11:57:24.609701   58921 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:57:24.609719   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:57:24.613446   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 200:
	ok
	I0722 11:57:24.614282   58921 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 11:57:24.614301   58921 api_server.go:131] duration metric: took 4.591983ms to wait for apiserver health ...
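The apiserver healthz check above hits https://192.168.61.112:8443/healthz and expects a 200 "ok". A rough equivalent of that probe is sketched below; the node IP comes from the log, and TLS verification is skipped only to keep the sketch short — minikube itself validates against the cluster CA.

```go
// Rough equivalent of the apiserver healthz probe logged above.
// InsecureSkipVerify is for brevity only; a real check trusts the cluster CA.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.61.112:8443/healthz") // node IP from the log
	if err != nil {
		fmt.Printf("apiserver not reachable: %v\n", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
```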
	I0722 11:57:24.614310   58921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:57:24.796872   58921 system_pods.go:59] 9 kube-system pods found
	I0722 11:57:24.796910   58921 system_pods.go:61] "coredns-5cfdc65f69-vg4wp" [3556f321-9c0a-437f-a06e-4eca4b07781d] Running
	I0722 11:57:24.796917   58921 system_pods.go:61] "coredns-5cfdc65f69-xxf6t" [6e933cad-a95a-47c4-b8b9-89205619fb70] Running
	I0722 11:57:24.796922   58921 system_pods.go:61] "etcd-no-preload-339929" [6cd32101-c444-44a3-b024-708228c1b2de] Running
	I0722 11:57:24.796927   58921 system_pods.go:61] "kube-apiserver-no-preload-339929" [cf127459-d105-4ed0-9b65-653e9353e123] Running
	I0722 11:57:24.796933   58921 system_pods.go:61] "kube-controller-manager-no-preload-339929" [11b4341e-5d2d-41c2-a078-6345546aa418] Running
	I0722 11:57:24.796940   58921 system_pods.go:61] "kube-proxy-b5xwg" [6ec19ad2-170e-4402-bcb7-ebf14a2537ce] Running
	I0722 11:57:24.796944   58921 system_pods.go:61] "kube-scheduler-no-preload-339929" [bb0b4b2e-ba9c-4521-bcf9-67230983dc8e] Running
	I0722 11:57:24.796953   58921 system_pods.go:61] "metrics-server-78fcd8795b-9vzx2" [bb2ae44c-3190-4025-8f2e-e236c52da27e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:24.796960   58921 system_pods.go:61] "storage-provisioner" [f56d91d7-a252-485d-936d-3f44804d26ec] Running
	I0722 11:57:24.796973   58921 system_pods.go:74] duration metric: took 182.655813ms to wait for pod list to return data ...
	I0722 11:57:24.796985   58921 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:57:24.992009   58921 default_sa.go:45] found service account: "default"
	I0722 11:57:24.992032   58921 default_sa.go:55] duration metric: took 195.040103ms for default service account to be created ...
	I0722 11:57:24.992040   58921 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:57:25.196738   58921 system_pods.go:86] 9 kube-system pods found
	I0722 11:57:25.196763   58921 system_pods.go:89] "coredns-5cfdc65f69-vg4wp" [3556f321-9c0a-437f-a06e-4eca4b07781d] Running
	I0722 11:57:25.196768   58921 system_pods.go:89] "coredns-5cfdc65f69-xxf6t" [6e933cad-a95a-47c4-b8b9-89205619fb70] Running
	I0722 11:57:25.196772   58921 system_pods.go:89] "etcd-no-preload-339929" [6cd32101-c444-44a3-b024-708228c1b2de] Running
	I0722 11:57:25.196777   58921 system_pods.go:89] "kube-apiserver-no-preload-339929" [cf127459-d105-4ed0-9b65-653e9353e123] Running
	I0722 11:57:25.196781   58921 system_pods.go:89] "kube-controller-manager-no-preload-339929" [11b4341e-5d2d-41c2-a078-6345546aa418] Running
	I0722 11:57:25.196785   58921 system_pods.go:89] "kube-proxy-b5xwg" [6ec19ad2-170e-4402-bcb7-ebf14a2537ce] Running
	I0722 11:57:25.196789   58921 system_pods.go:89] "kube-scheduler-no-preload-339929" [bb0b4b2e-ba9c-4521-bcf9-67230983dc8e] Running
	I0722 11:57:25.196795   58921 system_pods.go:89] "metrics-server-78fcd8795b-9vzx2" [bb2ae44c-3190-4025-8f2e-e236c52da27e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:25.196799   58921 system_pods.go:89] "storage-provisioner" [f56d91d7-a252-485d-936d-3f44804d26ec] Running
	I0722 11:57:25.196806   58921 system_pods.go:126] duration metric: took 204.761601ms to wait for k8s-apps to be running ...
	I0722 11:57:25.196813   58921 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:57:25.196855   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:25.217589   58921 system_svc.go:56] duration metric: took 20.766557ms WaitForService to wait for kubelet
	I0722 11:57:25.217619   58921 kubeadm.go:582] duration metric: took 9.871369454s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:57:25.217641   58921 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:57:25.395091   58921 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:57:25.395116   58921 node_conditions.go:123] node cpu capacity is 2
	I0722 11:57:25.395128   58921 node_conditions.go:105] duration metric: took 177.480389ms to run NodePressure ...
	I0722 11:57:25.395143   58921 start.go:241] waiting for startup goroutines ...
	I0722 11:57:25.395159   58921 start.go:246] waiting for cluster config update ...
	I0722 11:57:25.395173   58921 start.go:255] writing updated cluster config ...
	I0722 11:57:25.395623   58921 ssh_runner.go:195] Run: rm -f paused
	I0722 11:57:25.449438   58921 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0722 11:57:25.450840   58921 out.go:177] * Done! kubectl is now configured to use "no-preload-339929" cluster and "default" namespace by default
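The final start.go line above compares the client kubectl version (1.30.3) with the cluster version (1.31.0-beta.0) and reports a minor-version skew of 1, which is within kubectl's supported +/-1 range. A small illustrative sketch of that comparison follows; the parsing is deliberately simplified to major.minor and ignores pre-release suffixes, unlike a real semver comparison.

```go
// Illustrative minor-skew calculation, like the "kubectl: 1.30.3, cluster:
// 1.31.0-beta.0 (minor skew: 1)" line above. Parsing is simplified.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(v string) int {
	parts := strings.Split(v, ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.30.3", "1.31.0-beta.0"
	skew := minor(cluster) - minor(kubectl)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
}
```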
	I0722 11:57:25.545662   59674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:59:21.714624   59674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:59:21.714729   59674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 11:59:21.716617   59674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:59:21.716683   59674 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:59:21.716771   59674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:59:21.716939   59674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:59:21.717077   59674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:59:21.717136   59674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:59:21.718742   59674 out.go:204]   - Generating certificates and keys ...
	I0722 11:59:21.718837   59674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:59:21.718927   59674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:59:21.718995   59674 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:59:21.719065   59674 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:59:21.719140   59674 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:59:21.719187   59674 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:59:21.719251   59674 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:59:21.719329   59674 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:59:21.719408   59674 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:59:21.719497   59674 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:59:21.719538   59674 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:59:21.719592   59674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:59:21.719635   59674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:59:21.719680   59674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:59:21.719745   59674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:59:21.719823   59674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:59:21.719970   59674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:59:21.720056   59674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:59:21.720090   59674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:59:21.720147   59674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:59:21.721505   59674 out.go:204]   - Booting up control plane ...
	I0722 11:59:21.721586   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:59:21.721656   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:59:21.721712   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:59:21.721778   59674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:59:21.721923   59674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:59:21.721988   59674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:59:21.722045   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722201   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722272   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722431   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722488   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722658   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722730   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722885   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722943   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.723110   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.723118   59674 kubeadm.go:310] 
	I0722 11:59:21.723154   59674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:59:21.723192   59674 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:59:21.723198   59674 kubeadm.go:310] 
	I0722 11:59:21.723226   59674 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:59:21.723255   59674 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:59:21.723339   59674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:59:21.723346   59674 kubeadm.go:310] 
	I0722 11:59:21.723442   59674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:59:21.723495   59674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:59:21.723537   59674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:59:21.723546   59674 kubeadm.go:310] 
	I0722 11:59:21.723709   59674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:59:21.723823   59674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:59:21.723833   59674 kubeadm.go:310] 
	I0722 11:59:21.723941   59674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:59:21.724023   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:59:21.724086   59674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:59:21.724156   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:59:21.724197   59674 kubeadm.go:310] 
	I0722 11:59:21.724212   59674 kubeadm.go:394] duration metric: took 7m57.831193066s to StartCluster
	I0722 11:59:21.724246   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:59:21.724296   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:59:21.771578   59674 cri.go:89] found id: ""
	I0722 11:59:21.771611   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.771622   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:59:21.771631   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:59:21.771694   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:59:21.809027   59674 cri.go:89] found id: ""
	I0722 11:59:21.809055   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.809065   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:59:21.809071   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:59:21.809143   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:59:21.844667   59674 cri.go:89] found id: ""
	I0722 11:59:21.844690   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.844698   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:59:21.844703   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:59:21.844754   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:59:21.888054   59674 cri.go:89] found id: ""
	I0722 11:59:21.888078   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.888086   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:59:21.888091   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:59:21.888150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:59:21.931688   59674 cri.go:89] found id: ""
	I0722 11:59:21.931711   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.931717   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:59:21.931722   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:59:21.931775   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:59:21.974044   59674 cri.go:89] found id: ""
	I0722 11:59:21.974074   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.974095   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:59:21.974102   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:59:21.974170   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:59:22.010302   59674 cri.go:89] found id: ""
	I0722 11:59:22.010326   59674 logs.go:276] 0 containers: []
	W0722 11:59:22.010334   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:59:22.010338   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:59:22.010385   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:59:22.047170   59674 cri.go:89] found id: ""
	I0722 11:59:22.047201   59674 logs.go:276] 0 containers: []
	W0722 11:59:22.047212   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
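After the init failure, minikube searches for any control-plane containers by name via crictl; every query above returns an empty ID list, confirming nothing was ever started by the runtime. A sketch of one such per-component query is below; it assumes crictl is installed on the node and is run with sufficient privileges, and it shells out locally rather than through minikube's SSH runner.

```go
// Sketch of the per-component container search above: run
// "crictl ps -a --quiet --name=<component>" and treat empty output as
// "no container found". Assumes crictl is present and privileged access.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, name := range components {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
```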
	I0722 11:59:22.047224   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:59:22.047237   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:59:22.086648   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:59:22.086678   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:59:22.141255   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:59:22.141288   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:59:22.157063   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:59:22.157095   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:59:22.244259   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:59:22.244284   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:59:22.244300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0722 11:59:22.357489   59674 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 11:59:22.357536   59674 out.go:239] * 
	W0722 11:59:22.357600   59674 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
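	
	A minimal triage sketch for the failure above, assuming shell access to the node (for example via 'minikube ssh'), that sudo is available, and that the CRI-O socket is at its default path; CONTAINERID is a placeholder:
	
	  # confirm whether the kubelet is running and inspect why it exited
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet
	  # list the control-plane containers CRI-O started (the hint quoted above)
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  # inspect the logs of the failing container
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID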
	
	W0722 11:59:22.357622   59674 out.go:239] * 
	W0722 11:59:22.358374   59674 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 11:59:22.361655   59674 out.go:177] 
	W0722 11:59:22.362800   59674 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:59:22.362845   59674 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 11:59:22.362860   59674 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 11:59:22.364239   59674 out.go:177] 
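	
	A hedged retry sketch for the suggestion above; the profile name is a placeholder (not taken from this log), and the flag values are the ones quoted in the suggestion and in the issue box:
	
	  # retry the start with the suggested kubelet cgroup driver override
	  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	  # if the start still fails, collect logs to attach to the GitHub issue
	  minikube logs --file=logs.txt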
	
	
	==> CRI-O <==
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.075135102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721649963075101428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=380bcb1c-aa6f-4e98-8342-f227725e159e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.076694738Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=020868f3-11a1-411b-8e52-67ba0f0f5620 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.076790611Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=020868f3-11a1-411b-8e52-67ba0f0f5620 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.077043179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d62ba1b3090700c9cc4e355512f7cc3cd995f45c9a81380d21e6f65141f4edf,PodSandboxId:51b648598da7047598c076549ba95030986bd416e59171441da669cfe73c381e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721649419477758362,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff4a3e-008c-4c4e-9eb3-281c46b10279,},Annotations:map[string]string{io.kubernetes.container.hash: a5c39666,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8aefb11b4f0ba3a4c72f3542be8c94f63fcde72512953ec948268091c82ac3,PodSandboxId:a842394945019e02b0c66e0d18ad7a4e806568746cf3e021fd8955367403fc57,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649419183771349,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nlfgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02dda63-e71b-429f-b9d5-0b2ca40e8dcc,},Annotations:map[string]string{io.kubernetes.container.hash: c334977,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:432537d466c8bee7c20a34a3c6a5a75a34037c950cddcf5f6fa16d56dc2819ee,PodSandboxId:3ce3eb65599812c4e902ddaf2a7b2e3cef3fd6d7815616d5ff44b66ef66884ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649418896209316,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tnnxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 337c6df7-035c-488d-a123-a410d76d836b,},Annotations:map[string]string{io.kubernetes.container.hash: b00a2bfa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c615bf54ba39489d87267358018aced180e2d2ed4176890b07657c7f84888012,PodSandboxId:d845155770c78ea7c0f688f16ad322a84f1d160bb8783aeeca71f5456b424101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1721649418764148038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-58qcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c02c70-a840-410c-9d48-3d15a3927a77,},Annotations:map[string]string{io.kubernetes.container.hash: be0de0cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a96c39f4a48d2da7db089ded0c452d0eb329605826cf1de6007c2ee945a1ea2,PodSandboxId:55f657b008814eddbd2b6f4b56f1e79f07777d886ee2be08a9ab312dbf0a63e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721649398996484062,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333910fd7a599754e228f3a02579e9b3,},Annotations:map[string]string{io.kubernetes.container.hash: a092ece8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9561f587825f7b2f1ed0170773f5e1bb49b323711509e51aa110494a33e3d185,PodSandboxId:e0a43c4765d9a23eab09f73458094fd64df000f341149d39181823a9dbc1f1a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721649398978204722,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed9adb69859978175606b15fe22afa16,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c792eb6ba9b67b8c9990240703e3195a8bff6158dfeb0b2da84d7b08d61cd6,PodSandboxId:db6387d8039152bd6bb3da85188473d1b77a36ab086581d95bb8b957a1c8fce1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721649398939032395,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79412e76d5c889e0a6afa4ad891ae951,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce42664a9cd3636fa44510de90daf5f0c9082d77580af09c0515ae2359f3fc87,PodSandboxId:7e247212480d9f05b1019b4c39331c892c0f7ff7e9c5fb23bc4acfe71ac60300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721649398872728178,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda9b2ba3aef6048a36128965540beb9,},Annotations:map[string]string{io.kubernetes.container.hash: f892f9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=020868f3-11a1-411b-8e52-67ba0f0f5620 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.119094882Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb9b9513-288e-472a-a2f4-9e36b586f70c name=/runtime.v1.RuntimeService/Version
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.119184783Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb9b9513-288e-472a-a2f4-9e36b586f70c name=/runtime.v1.RuntimeService/Version
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.120793702Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2b9ec12-db46-4d9e-b2c3-d23dc7551e1d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.121588498Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721649963121559907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2b9ec12-db46-4d9e-b2c3-d23dc7551e1d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.121957230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3f27d14-a99e-4953-b6a7-702b2926c351 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.122038364Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3f27d14-a99e-4953-b6a7-702b2926c351 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.122216631Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d62ba1b3090700c9cc4e355512f7cc3cd995f45c9a81380d21e6f65141f4edf,PodSandboxId:51b648598da7047598c076549ba95030986bd416e59171441da669cfe73c381e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721649419477758362,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff4a3e-008c-4c4e-9eb3-281c46b10279,},Annotations:map[string]string{io.kubernetes.container.hash: a5c39666,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8aefb11b4f0ba3a4c72f3542be8c94f63fcde72512953ec948268091c82ac3,PodSandboxId:a842394945019e02b0c66e0d18ad7a4e806568746cf3e021fd8955367403fc57,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649419183771349,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nlfgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02dda63-e71b-429f-b9d5-0b2ca40e8dcc,},Annotations:map[string]string{io.kubernetes.container.hash: c334977,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:432537d466c8bee7c20a34a3c6a5a75a34037c950cddcf5f6fa16d56dc2819ee,PodSandboxId:3ce3eb65599812c4e902ddaf2a7b2e3cef3fd6d7815616d5ff44b66ef66884ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649418896209316,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tnnxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 337c6df7-035c-488d-a123-a410d76d836b,},Annotations:map[string]string{io.kubernetes.container.hash: b00a2bfa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c615bf54ba39489d87267358018aced180e2d2ed4176890b07657c7f84888012,PodSandboxId:d845155770c78ea7c0f688f16ad322a84f1d160bb8783aeeca71f5456b424101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1721649418764148038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-58qcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c02c70-a840-410c-9d48-3d15a3927a77,},Annotations:map[string]string{io.kubernetes.container.hash: be0de0cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a96c39f4a48d2da7db089ded0c452d0eb329605826cf1de6007c2ee945a1ea2,PodSandboxId:55f657b008814eddbd2b6f4b56f1e79f07777d886ee2be08a9ab312dbf0a63e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721649398996484062,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333910fd7a599754e228f3a02579e9b3,},Annotations:map[string]string{io.kubernetes.container.hash: a092ece8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9561f587825f7b2f1ed0170773f5e1bb49b323711509e51aa110494a33e3d185,PodSandboxId:e0a43c4765d9a23eab09f73458094fd64df000f341149d39181823a9dbc1f1a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721649398978204722,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed9adb69859978175606b15fe22afa16,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c792eb6ba9b67b8c9990240703e3195a8bff6158dfeb0b2da84d7b08d61cd6,PodSandboxId:db6387d8039152bd6bb3da85188473d1b77a36ab086581d95bb8b957a1c8fce1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721649398939032395,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79412e76d5c889e0a6afa4ad891ae951,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce42664a9cd3636fa44510de90daf5f0c9082d77580af09c0515ae2359f3fc87,PodSandboxId:7e247212480d9f05b1019b4c39331c892c0f7ff7e9c5fb23bc4acfe71ac60300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721649398872728178,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda9b2ba3aef6048a36128965540beb9,},Annotations:map[string]string{io.kubernetes.container.hash: f892f9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3f27d14-a99e-4953-b6a7-702b2926c351 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.159238988Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=775bf79a-7bc4-4b66-be59-a30ecb33eb03 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.159332800Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=775bf79a-7bc4-4b66-be59-a30ecb33eb03 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.160569374Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc785937-d614-42d8-8ff2-76120c14ac5e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.160951756Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721649963160931489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc785937-d614-42d8-8ff2-76120c14ac5e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.161795600Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab393419-e2fe-4e74-9ed0-8a4527b50dd4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.161871016Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab393419-e2fe-4e74-9ed0-8a4527b50dd4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.162050885Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d62ba1b3090700c9cc4e355512f7cc3cd995f45c9a81380d21e6f65141f4edf,PodSandboxId:51b648598da7047598c076549ba95030986bd416e59171441da669cfe73c381e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721649419477758362,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff4a3e-008c-4c4e-9eb3-281c46b10279,},Annotations:map[string]string{io.kubernetes.container.hash: a5c39666,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8aefb11b4f0ba3a4c72f3542be8c94f63fcde72512953ec948268091c82ac3,PodSandboxId:a842394945019e02b0c66e0d18ad7a4e806568746cf3e021fd8955367403fc57,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649419183771349,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nlfgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02dda63-e71b-429f-b9d5-0b2ca40e8dcc,},Annotations:map[string]string{io.kubernetes.container.hash: c334977,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:432537d466c8bee7c20a34a3c6a5a75a34037c950cddcf5f6fa16d56dc2819ee,PodSandboxId:3ce3eb65599812c4e902ddaf2a7b2e3cef3fd6d7815616d5ff44b66ef66884ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649418896209316,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tnnxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 337c6df7-035c-488d-a123-a410d76d836b,},Annotations:map[string]string{io.kubernetes.container.hash: b00a2bfa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c615bf54ba39489d87267358018aced180e2d2ed4176890b07657c7f84888012,PodSandboxId:d845155770c78ea7c0f688f16ad322a84f1d160bb8783aeeca71f5456b424101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1721649418764148038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-58qcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c02c70-a840-410c-9d48-3d15a3927a77,},Annotations:map[string]string{io.kubernetes.container.hash: be0de0cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a96c39f4a48d2da7db089ded0c452d0eb329605826cf1de6007c2ee945a1ea2,PodSandboxId:55f657b008814eddbd2b6f4b56f1e79f07777d886ee2be08a9ab312dbf0a63e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721649398996484062,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333910fd7a599754e228f3a02579e9b3,},Annotations:map[string]string{io.kubernetes.container.hash: a092ece8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9561f587825f7b2f1ed0170773f5e1bb49b323711509e51aa110494a33e3d185,PodSandboxId:e0a43c4765d9a23eab09f73458094fd64df000f341149d39181823a9dbc1f1a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721649398978204722,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed9adb69859978175606b15fe22afa16,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c792eb6ba9b67b8c9990240703e3195a8bff6158dfeb0b2da84d7b08d61cd6,PodSandboxId:db6387d8039152bd6bb3da85188473d1b77a36ab086581d95bb8b957a1c8fce1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721649398939032395,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79412e76d5c889e0a6afa4ad891ae951,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce42664a9cd3636fa44510de90daf5f0c9082d77580af09c0515ae2359f3fc87,PodSandboxId:7e247212480d9f05b1019b4c39331c892c0f7ff7e9c5fb23bc4acfe71ac60300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721649398872728178,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda9b2ba3aef6048a36128965540beb9,},Annotations:map[string]string{io.kubernetes.container.hash: f892f9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab393419-e2fe-4e74-9ed0-8a4527b50dd4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.193296919Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=57df5f4c-0e00-40bc-82ae-ceaf29cb366e name=/runtime.v1.RuntimeService/Version
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.193379147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=57df5f4c-0e00-40bc-82ae-ceaf29cb366e name=/runtime.v1.RuntimeService/Version
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.194629462Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d631f56c-4218-4165-88e8-5e4d1a8b125c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.195092381Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721649963195072877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d631f56c-4218-4165-88e8-5e4d1a8b125c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.195841929Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=463a96e1-9a31-4ded-9a40-c237f21cdced name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.195911421Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=463a96e1-9a31-4ded-9a40-c237f21cdced name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:03 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:06:03.196080197Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d62ba1b3090700c9cc4e355512f7cc3cd995f45c9a81380d21e6f65141f4edf,PodSandboxId:51b648598da7047598c076549ba95030986bd416e59171441da669cfe73c381e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721649419477758362,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff4a3e-008c-4c4e-9eb3-281c46b10279,},Annotations:map[string]string{io.kubernetes.container.hash: a5c39666,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8aefb11b4f0ba3a4c72f3542be8c94f63fcde72512953ec948268091c82ac3,PodSandboxId:a842394945019e02b0c66e0d18ad7a4e806568746cf3e021fd8955367403fc57,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649419183771349,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nlfgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02dda63-e71b-429f-b9d5-0b2ca40e8dcc,},Annotations:map[string]string{io.kubernetes.container.hash: c334977,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:432537d466c8bee7c20a34a3c6a5a75a34037c950cddcf5f6fa16d56dc2819ee,PodSandboxId:3ce3eb65599812c4e902ddaf2a7b2e3cef3fd6d7815616d5ff44b66ef66884ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649418896209316,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tnnxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 337c6df7-035c-488d-a123-a410d76d836b,},Annotations:map[string]string{io.kubernetes.container.hash: b00a2bfa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c615bf54ba39489d87267358018aced180e2d2ed4176890b07657c7f84888012,PodSandboxId:d845155770c78ea7c0f688f16ad322a84f1d160bb8783aeeca71f5456b424101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1721649418764148038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-58qcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c02c70-a840-410c-9d48-3d15a3927a77,},Annotations:map[string]string{io.kubernetes.container.hash: be0de0cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a96c39f4a48d2da7db089ded0c452d0eb329605826cf1de6007c2ee945a1ea2,PodSandboxId:55f657b008814eddbd2b6f4b56f1e79f07777d886ee2be08a9ab312dbf0a63e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721649398996484062,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333910fd7a599754e228f3a02579e9b3,},Annotations:map[string]string{io.kubernetes.container.hash: a092ece8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9561f587825f7b2f1ed0170773f5e1bb49b323711509e51aa110494a33e3d185,PodSandboxId:e0a43c4765d9a23eab09f73458094fd64df000f341149d39181823a9dbc1f1a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721649398978204722,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed9adb69859978175606b15fe22afa16,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c792eb6ba9b67b8c9990240703e3195a8bff6158dfeb0b2da84d7b08d61cd6,PodSandboxId:db6387d8039152bd6bb3da85188473d1b77a36ab086581d95bb8b957a1c8fce1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721649398939032395,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79412e76d5c889e0a6afa4ad891ae951,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce42664a9cd3636fa44510de90daf5f0c9082d77580af09c0515ae2359f3fc87,PodSandboxId:7e247212480d9f05b1019b4c39331c892c0f7ff7e9c5fb23bc4acfe71ac60300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721649398872728178,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda9b2ba3aef6048a36128965540beb9,},Annotations:map[string]string{io.kubernetes.container.hash: f892f9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=463a96e1-9a31-4ded-9a40-c237f21cdced name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5d62ba1b30907       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   51b648598da70       storage-provisioner
	5b8aefb11b4f0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   a842394945019       coredns-7db6d8ff4d-nlfgl
	432537d466c8b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3ce3eb6559981       coredns-7db6d8ff4d-tnnxf
	c615bf54ba394       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   d845155770c78       kube-proxy-58qcp
	2a96c39f4a48d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   55f657b008814       etcd-default-k8s-diff-port-605740
	9561f587825f7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   e0a43c4765d9a       kube-scheduler-default-k8s-diff-port-605740
	52c792eb6ba9b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   db6387d803915       kube-controller-manager-default-k8s-diff-port-605740
	ce42664a9cd36       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   7e247212480d9       kube-apiserver-default-k8s-diff-port-605740
	
	
	==> coredns [432537d466c8bee7c20a34a3c6a5a75a34037c950cddcf5f6fa16d56dc2819ee] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [5b8aefb11b4f0ba3a4c72f3542be8c94f63fcde72512953ec948268091c82ac3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-605740
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-605740
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=default-k8s-diff-port-605740
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T11_56_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 11:56:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-605740
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 12:05:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 12:02:09 +0000   Mon, 22 Jul 2024 11:56:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 12:02:09 +0000   Mon, 22 Jul 2024 11:56:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 12:02:09 +0000   Mon, 22 Jul 2024 11:56:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 12:02:09 +0000   Mon, 22 Jul 2024 11:56:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    default-k8s-diff-port-605740
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fff1d262e8904b2ca6da869b38918cfa
	  System UUID:                fff1d262-e890-4b2c-a6da-869b38918cfa
	  Boot ID:                    afc6903b-aa25-43a8-bb6a-9fb2f2fad052
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-nlfgl                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-7db6d8ff4d-tnnxf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-default-k8s-diff-port-605740                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-605740             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-605740    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-58qcp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-default-k8s-diff-port-605740             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-569cc877fc-2xv7x                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m25s (x8 over 9m25s)  kubelet          Node default-k8s-diff-port-605740 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m25s (x8 over 9m25s)  kubelet          Node default-k8s-diff-port-605740 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m25s (x7 over 9m25s)  kubelet          Node default-k8s-diff-port-605740 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s                  kubelet          Node default-k8s-diff-port-605740 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s                  kubelet          Node default-k8s-diff-port-605740 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s                  kubelet          Node default-k8s-diff-port-605740 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s                   node-controller  Node default-k8s-diff-port-605740 event: Registered Node default-k8s-diff-port-605740 in Controller
	
	
	==> dmesg <==
	[  +0.042258] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.813390] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.419009] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.609954] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.202692] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.064059] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061316] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.213954] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.119542] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.315886] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +4.575435] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.066115] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.857451] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +5.604653] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.287302] kauditd_printk_skb: 50 callbacks suppressed
	[Jul22 11:52] kauditd_printk_skb: 27 callbacks suppressed
	[Jul22 11:56] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.796098] systemd-fstab-generator[3586]: Ignoring "noauto" option for root device
	[  +4.643419] kauditd_printk_skb: 55 callbacks suppressed
	[  +1.408020] systemd-fstab-generator[3910]: Ignoring "noauto" option for root device
	[ +13.908306] systemd-fstab-generator[4105]: Ignoring "noauto" option for root device
	[  +0.097831] kauditd_printk_skb: 14 callbacks suppressed
	[Jul22 11:58] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [2a96c39f4a48d2da7db089ded0c452d0eb329605826cf1de6007c2ee945a1ea2] <==
	{"level":"info","ts":"2024-07-22T11:56:39.386819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a switched to configuration voters=(12310432666106675562)"}
	{"level":"info","ts":"2024-07-22T11:56:39.386975Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8794d44e1d88e05d","local-member-id":"aad771494ea7416a","added-peer-id":"aad771494ea7416a","added-peer-peer-urls":["https://192.168.39.87:2380"]}
	{"level":"info","ts":"2024-07-22T11:56:39.392224Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-22T11:56:39.394798Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aad771494ea7416a","initial-advertise-peer-urls":["https://192.168.39.87:2380"],"listen-peer-urls":["https://192.168.39.87:2380"],"advertise-client-urls":["https://192.168.39.87:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.87:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-22T11:56:39.392666Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-07-22T11:56:39.39767Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-07-22T11:56:39.398116Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T11:56:39.617636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-22T11:56:39.617775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-22T11:56:39.617851Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a received MsgPreVoteResp from aad771494ea7416a at term 1"}
	{"level":"info","ts":"2024-07-22T11:56:39.617892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became candidate at term 2"}
	{"level":"info","ts":"2024-07-22T11:56:39.619575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a received MsgVoteResp from aad771494ea7416a at term 2"}
	{"level":"info","ts":"2024-07-22T11:56:39.619675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became leader at term 2"}
	{"level":"info","ts":"2024-07-22T11:56:39.619707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aad771494ea7416a elected leader aad771494ea7416a at term 2"}
	{"level":"info","ts":"2024-07-22T11:56:39.622826Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:56:39.625267Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8794d44e1d88e05d","local-member-id":"aad771494ea7416a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:56:39.625596Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:56:39.625339Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aad771494ea7416a","local-member-attributes":"{Name:default-k8s-diff-port-605740 ClientURLs:[https://192.168.39.87:2379]}","request-path":"/0/members/aad771494ea7416a/attributes","cluster-id":"8794d44e1d88e05d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T11:56:39.625354Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:56:39.625363Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:56:39.629604Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T11:56:39.629891Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T11:56:39.629916Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:56:39.642092Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T11:56:39.677464Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.87:2379"}
	
	
	==> kernel <==
	 12:06:03 up 14 min,  0 users,  load average: 0.02, 0.22, 0.21
	Linux default-k8s-diff-port-605740 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ce42664a9cd3636fa44510de90daf5f0c9082d77580af09c0515ae2359f3fc87] <==
	I0722 11:59:59.813714       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:01:41.588104       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:01:41.588226       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0722 12:01:42.588905       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:01:42.589002       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 12:01:42.589009       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:01:42.589100       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:01:42.589155       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 12:01:42.590317       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:02:42.589723       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:02:42.589970       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 12:02:42.590008       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:02:42.590805       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:02:42.590840       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 12:02:42.592016       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:04:42.590939       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:04:42.591272       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 12:04:42.591312       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:04:42.592395       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:04:42.592474       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 12:04:42.592504       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [52c792eb6ba9b67b8c9990240703e3195a8bff6158dfeb0b2da84d7b08d61cd6] <==
	I0722 12:00:27.541754       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:00:57.001236       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:00:57.550019       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:01:27.006688       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:01:27.558295       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:01:57.012609       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:01:57.566205       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:02:27.017960       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:02:27.574880       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 12:02:47.029268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="829.621µs"
	E0722 12:02:57.023740       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:02:57.583213       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 12:03:02.028713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="148.08µs"
	E0722 12:03:27.029982       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:03:27.595093       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:03:57.035579       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:03:57.603215       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:04:27.040854       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:04:27.611129       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:04:57.046707       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:04:57.619332       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:05:27.055066       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:05:27.627261       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:05:57.060039       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:05:57.634888       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c615bf54ba39489d87267358018aced180e2d2ed4176890b07657c7f84888012] <==
	I0722 11:56:59.568697       1 server_linux.go:69] "Using iptables proxy"
	I0722 11:56:59.591062       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.87"]
	I0722 11:56:59.675933       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 11:56:59.676101       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 11:56:59.676181       1 server_linux.go:165] "Using iptables Proxier"
	I0722 11:56:59.678791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 11:56:59.679000       1 server.go:872] "Version info" version="v1.30.3"
	I0722 11:56:59.679217       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 11:56:59.680897       1 config.go:192] "Starting service config controller"
	I0722 11:56:59.680974       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 11:56:59.681022       1 config.go:101] "Starting endpoint slice config controller"
	I0722 11:56:59.681039       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 11:56:59.681661       1 config.go:319] "Starting node config controller"
	I0722 11:56:59.682707       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 11:56:59.781711       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 11:56:59.781751       1 shared_informer.go:320] Caches are synced for service config
	I0722 11:56:59.783191       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9561f587825f7b2f1ed0170773f5e1bb49b323711509e51aa110494a33e3d185] <==
	E0722 11:56:41.614010       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 11:56:41.614019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0722 11:56:41.614016       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 11:56:41.614197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 11:56:41.614278       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0722 11:56:41.614287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 11:56:42.520833       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 11:56:42.520890       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 11:56:42.545114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 11:56:42.545165       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 11:56:42.580158       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 11:56:42.580238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 11:56:42.589201       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 11:56:42.589386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 11:56:42.600613       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 11:56:42.600807       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 11:56:42.620718       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 11:56:42.620787       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 11:56:42.685124       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 11:56:42.685228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 11:56:42.742327       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 11:56:42.743494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0722 11:56:42.864333       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 11:56:42.865599       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0722 11:56:45.890047       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 12:03:44 default-k8s-diff-port-605740 kubelet[3917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 12:03:44 default-k8s-diff-port-605740 kubelet[3917]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 12:03:44 default-k8s-diff-port-605740 kubelet[3917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:03:44 default-k8s-diff-port-605740 kubelet[3917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:03:56 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:03:56.011974    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:04:07 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:04:07.011726    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:04:20 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:04:20.011483    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:04:34 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:04:34.013880    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:04:44 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:04:44.027951    3917 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 12:04:44 default-k8s-diff-port-605740 kubelet[3917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 12:04:44 default-k8s-diff-port-605740 kubelet[3917]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 12:04:44 default-k8s-diff-port-605740 kubelet[3917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:04:44 default-k8s-diff-port-605740 kubelet[3917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:04:47 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:04:47.012589    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:04:58 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:04:58.013405    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:05:11 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:05:11.012421    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:05:25 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:05:25.013072    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:05:36 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:05:36.021835    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:05:44 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:05:44.027329    3917 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 12:05:44 default-k8s-diff-port-605740 kubelet[3917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 12:05:44 default-k8s-diff-port-605740 kubelet[3917]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 12:05:44 default-k8s-diff-port-605740 kubelet[3917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:05:44 default-k8s-diff-port-605740 kubelet[3917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:05:48 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:05:48.011657    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:06:00 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:06:00.011490    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	
	
	==> storage-provisioner [5d62ba1b3090700c9cc4e355512f7cc3cd995f45c9a81380d21e6f65141f4edf] <==
	I0722 11:56:59.627682       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 11:56:59.636888       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 11:56:59.637111       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 11:56:59.645866       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 11:56:59.647207       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-605740_172d62d7-8605-4bf3-8185-6dec47d6d8e0!
	I0722 11:56:59.650962       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ef8bd77f-53d4-42a0-8994-dfd3795ed32f", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-605740_172d62d7-8605-4bf3-8185-6dec47d6d8e0 became leader
	I0722 11:56:59.748921       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-605740_172d62d7-8605-4bf3-8185-6dec47d6d8e0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-605740 -n default-k8s-diff-port-605740
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-605740 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-2xv7x
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-605740 describe pod metrics-server-569cc877fc-2xv7x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-605740 describe pod metrics-server-569cc877fc-2xv7x: exit status 1 (59.780116ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-2xv7x" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-605740 describe pod metrics-server-569cc877fc-2xv7x: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0722 11:58:29.087278   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-339929 -n no-preload-339929
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-22 12:06:25.979252444 +0000 UTC m=+5856.706666795
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-339929 -n no-preload-339929
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-339929 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-339929 logs -n 25: (2.046850823s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC | 22 Jul 24 11:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC | 22 Jul 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-467176                              | cert-expiration-467176       | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-339929             | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-339929                                   | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-467176                              | cert-expiration-467176       | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	| start   | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-802149            | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	| delete  | -p                                                     | disable-driver-mounts-737017 | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	|         | disable-driver-mounts-737017                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:46 UTC |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-101261        | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:45 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-339929                  | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-339929 --memory=2200                     | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC | 22 Jul 24 11:57 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-605740  | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC | 22 Jul 24 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC |                     |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-802149                 | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-101261                              | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:47 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-101261             | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:47 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-101261                              | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-605740       | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:49 UTC | 22 Jul 24 11:57 UTC |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 11:49:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
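The [IWEF] prefix described above makes the raw log easy to post-filter. A minimal sketch, assuming the log has been saved locally to a file named last-start.log (a hypothetical filename) without the report's leading tabs:

	# keep only W.../E... (warning/error) entries from a klog-formatted file
	grep -E '^[WE][0-9]{4} ' last-start.log
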
	I0722 11:49:15.771364   60225 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:49:15.771757   60225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:49:15.771777   60225 out.go:304] Setting ErrFile to fd 2...
	I0722 11:49:15.771784   60225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:49:15.772270   60225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:49:15.773178   60225 out.go:298] Setting JSON to false
	I0722 11:49:15.774093   60225 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5508,"bootTime":1721643448,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 11:49:15.774158   60225 start.go:139] virtualization: kvm guest
	I0722 11:49:15.776078   60225 out.go:177] * [default-k8s-diff-port-605740] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 11:49:15.777632   60225 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 11:49:15.777656   60225 notify.go:220] Checking for updates...
	I0722 11:49:15.780016   60225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 11:49:15.781179   60225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:49:15.782401   60225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:49:15.783538   60225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 11:49:15.784660   60225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 11:49:15.786153   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:49:15.786546   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:49:15.786580   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:49:15.801130   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40897
	I0722 11:49:15.801454   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:49:15.802000   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:49:15.802022   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:49:15.802343   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:49:15.802519   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:49:15.802785   60225 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 11:49:15.803097   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:49:15.803130   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:49:15.817222   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I0722 11:49:15.817616   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:49:15.818025   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:49:15.818050   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:49:15.818316   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:49:15.818457   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:49:15.851885   60225 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 11:49:15.853142   60225 start.go:297] selected driver: kvm2
	I0722 11:49:15.853162   60225 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:49:15.853293   60225 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 11:49:15.854178   60225 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:49:15.854267   60225 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 11:49:15.869086   60225 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 11:49:15.869437   60225 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:49:15.869496   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:49:15.869510   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:49:15.869553   60225 start.go:340] cluster config:
	{Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:49:15.869650   60225 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:49:15.871443   60225 out.go:177] * Starting "default-k8s-diff-port-605740" primary control-plane node in "default-k8s-diff-port-605740" cluster
	I0722 11:49:18.708660   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:15.872666   60225 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:49:15.872712   60225 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 11:49:15.872722   60225 cache.go:56] Caching tarball of preloaded images
	I0722 11:49:15.872822   60225 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 11:49:15.872836   60225 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 11:49:15.872964   60225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/config.json ...
	I0722 11:49:15.873188   60225 start.go:360] acquireMachinesLock for default-k8s-diff-port-605740: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:49:21.780635   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:27.860643   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:30.932670   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:37.012663   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:40.084620   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:46.164558   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:49.236597   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:55.316683   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:58.388708   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:04.468652   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:07.540692   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:13.620745   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:16.692661   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:22.772655   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:25.844570   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:31.924648   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:34.996632   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:38.000554   59477 start.go:364] duration metric: took 3m13.232713685s to acquireMachinesLock for "embed-certs-802149"
	I0722 11:50:38.000603   59477 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:50:38.000609   59477 fix.go:54] fixHost starting: 
	I0722 11:50:38.000916   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:50:38.000945   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:50:38.015673   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0722 11:50:38.016063   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:50:38.016570   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:50:38.016599   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:50:38.016926   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:50:38.017123   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:38.017256   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:50:38.018766   59477 fix.go:112] recreateIfNeeded on embed-certs-802149: state=Stopped err=<nil>
	I0722 11:50:38.018787   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	W0722 11:50:38.018925   59477 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:50:38.020306   59477 out.go:177] * Restarting existing kvm2 VM for "embed-certs-802149" ...
	I0722 11:50:38.021405   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Start
	I0722 11:50:38.021569   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring networks are active...
	I0722 11:50:38.022209   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring network default is active
	I0722 11:50:38.022492   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring network mk-embed-certs-802149 is active
	I0722 11:50:38.022753   59477 main.go:141] libmachine: (embed-certs-802149) Getting domain xml...
	I0722 11:50:38.023364   59477 main.go:141] libmachine: (embed-certs-802149) Creating domain...
	I0722 11:50:39.205696   59477 main.go:141] libmachine: (embed-certs-802149) Waiting to get IP...
	I0722 11:50:39.206555   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.206928   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.207002   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.206893   60553 retry.go:31] will retry after 250.927989ms: waiting for machine to come up
	I0722 11:50:39.459432   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.459909   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.459938   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.459862   60553 retry.go:31] will retry after 277.950273ms: waiting for machine to come up
	I0722 11:50:37.998282   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:50:37.998320   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:50:37.998616   58921 buildroot.go:166] provisioning hostname "no-preload-339929"
	I0722 11:50:37.998638   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:50:37.998852   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:50:38.000410   58921 machine.go:97] duration metric: took 4m37.434000152s to provisionDockerMachine
	I0722 11:50:38.000456   58921 fix.go:56] duration metric: took 4m37.453731858s for fixHost
	I0722 11:50:38.000466   58921 start.go:83] releasing machines lock for "no-preload-339929", held for 4m37.453770575s
	W0722 11:50:38.000487   58921 start.go:714] error starting host: provision: host is not running
	W0722 11:50:38.000589   58921 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0722 11:50:38.000597   58921 start.go:729] Will try again in 5 seconds ...
	I0722 11:50:39.739339   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.739770   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.739799   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.739724   60553 retry.go:31] will retry after 367.4788ms: waiting for machine to come up
	I0722 11:50:40.109153   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:40.109568   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:40.109598   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:40.109518   60553 retry.go:31] will retry after 599.052603ms: waiting for machine to come up
	I0722 11:50:40.709866   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:40.710342   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:40.710375   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:40.710299   60553 retry.go:31] will retry after 469.478286ms: waiting for machine to come up
	I0722 11:50:41.180930   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:41.181348   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:41.181370   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:41.181302   60553 retry.go:31] will retry after 690.713081ms: waiting for machine to come up
	I0722 11:50:41.873801   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:41.874158   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:41.874182   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:41.874106   60553 retry.go:31] will retry after 828.336067ms: waiting for machine to come up
	I0722 11:50:42.703984   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:42.704401   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:42.704422   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:42.704340   60553 retry.go:31] will retry after 1.22368693s: waiting for machine to come up
	I0722 11:50:43.929406   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:43.929866   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:43.929896   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:43.929838   60553 retry.go:31] will retry after 1.809806439s: waiting for machine to come up
	I0722 11:50:43.002990   58921 start.go:360] acquireMachinesLock for no-preload-339929: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:50:45.741657   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:45.742012   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:45.742034   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:45.741979   60553 retry.go:31] will retry after 2.216041266s: waiting for machine to come up
	I0722 11:50:47.959511   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:47.959979   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:47.960003   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:47.959919   60553 retry.go:31] will retry after 2.278973432s: waiting for machine to come up
	I0722 11:50:50.241992   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:50.242399   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:50.242413   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:50.242377   60553 retry.go:31] will retry after 2.533863574s: waiting for machine to come up
	I0722 11:50:52.779222   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:52.779627   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:52.779661   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:52.779579   60553 retry.go:31] will retry after 3.004874532s: waiting for machine to come up
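The DHCP lease the driver keeps polling for above can also be inspected by hand. An illustrative command, not something the test runs; it assumes shell access on the libvirt host that owns the mk-embed-certs-802149 network named in the log:

	# list current DHCP leases on the minikube-created libvirt network
	virsh net-dhcp-leases mk-embed-certs-802149
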
	I0722 11:50:57.057071   59674 start.go:364] duration metric: took 3m21.54200658s to acquireMachinesLock for "old-k8s-version-101261"
	I0722 11:50:57.057128   59674 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:50:57.057138   59674 fix.go:54] fixHost starting: 
	I0722 11:50:57.057543   59674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:50:57.057575   59674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:50:57.073788   59674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36245
	I0722 11:50:57.074103   59674 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:50:57.074561   59674 main.go:141] libmachine: Using API Version  1
	I0722 11:50:57.074582   59674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:50:57.074903   59674 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:50:57.075091   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:50:57.075225   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetState
	I0722 11:50:57.076587   59674 fix.go:112] recreateIfNeeded on old-k8s-version-101261: state=Stopped err=<nil>
	I0722 11:50:57.076607   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	W0722 11:50:57.076745   59674 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:50:57.079659   59674 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-101261" ...
	I0722 11:50:55.787998   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.788533   59477 main.go:141] libmachine: (embed-certs-802149) Found IP for machine: 192.168.72.113
	I0722 11:50:55.788556   59477 main.go:141] libmachine: (embed-certs-802149) Reserving static IP address...
	I0722 11:50:55.788567   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has current primary IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.788933   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "embed-certs-802149", mac: "52:54:00:ce:af:8a", ip: "192.168.72.113"} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.788954   59477 main.go:141] libmachine: (embed-certs-802149) DBG | skip adding static IP to network mk-embed-certs-802149 - found existing host DHCP lease matching {name: "embed-certs-802149", mac: "52:54:00:ce:af:8a", ip: "192.168.72.113"}
	I0722 11:50:55.788965   59477 main.go:141] libmachine: (embed-certs-802149) Reserved static IP address: 192.168.72.113
	I0722 11:50:55.788974   59477 main.go:141] libmachine: (embed-certs-802149) Waiting for SSH to be available...
	I0722 11:50:55.788984   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Getting to WaitForSSH function...
	I0722 11:50:55.791252   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.791573   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.791597   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.791699   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Using SSH client type: external
	I0722 11:50:55.791735   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa (-rw-------)
	I0722 11:50:55.791758   59477 main.go:141] libmachine: (embed-certs-802149) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:50:55.791768   59477 main.go:141] libmachine: (embed-certs-802149) DBG | About to run SSH command:
	I0722 11:50:55.791776   59477 main.go:141] libmachine: (embed-certs-802149) DBG | exit 0
	I0722 11:50:55.916215   59477 main.go:141] libmachine: (embed-certs-802149) DBG | SSH cmd err, output: <nil>: 
	I0722 11:50:55.916575   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetConfigRaw
	I0722 11:50:55.917177   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:55.919429   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.919723   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.919755   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.920020   59477 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/config.json ...
	I0722 11:50:55.920227   59477 machine.go:94] provisionDockerMachine start ...
	I0722 11:50:55.920249   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:55.920461   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:55.922469   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.922731   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.922756   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.922887   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:55.923063   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:55.923205   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:55.923340   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:55.923492   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:55.923698   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:55.923712   59477 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:50:56.032434   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:50:56.032465   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.032684   59477 buildroot.go:166] provisioning hostname "embed-certs-802149"
	I0722 11:50:56.032712   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.032892   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.035477   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.035797   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.035826   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.035969   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.036126   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.036288   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.036426   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.036649   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.036806   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.036818   59477 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-802149 && echo "embed-certs-802149" | sudo tee /etc/hostname
	I0722 11:50:56.158574   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-802149
	
	I0722 11:50:56.158609   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.161390   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.161780   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.161812   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.161978   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.162246   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.162444   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.162593   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.162793   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.162965   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.162983   59477 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-802149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-802149/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-802149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:50:56.281386   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:50:56.281421   59477 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:50:56.281454   59477 buildroot.go:174] setting up certificates
	I0722 11:50:56.281470   59477 provision.go:84] configureAuth start
	I0722 11:50:56.281487   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.281781   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:56.284122   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.284438   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.284468   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.284549   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.286400   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.286806   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.286835   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.286962   59477 provision.go:143] copyHostCerts
	I0722 11:50:56.287027   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:50:56.287038   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:50:56.287102   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:50:56.287205   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:50:56.287214   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:50:56.287241   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:50:56.287297   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:50:56.287304   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:50:56.287326   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:50:56.287372   59477 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.embed-certs-802149 san=[127.0.0.1 192.168.72.113 embed-certs-802149 localhost minikube]
	I0722 11:50:56.388618   59477 provision.go:177] copyRemoteCerts
	I0722 11:50:56.388666   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:50:56.388689   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.391149   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.391436   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.391460   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.391656   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.391810   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.391928   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.392068   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:56.474640   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:50:56.497641   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 11:50:56.519444   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:50:56.541351   59477 provision.go:87] duration metric: took 259.857731ms to configureAuth
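The SANs requested above (127.0.0.1, 192.168.72.113, embed-certs-802149, localhost, minikube) can be confirmed against the certificate that configureAuth just generated. An illustrative check, not part of the test run; the path is the one shown in the log:

	# print the Subject Alternative Name extension of the generated server cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
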
	I0722 11:50:56.541381   59477 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:50:56.541543   59477 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:50:56.541625   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.544154   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.544682   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.544718   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.544922   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.545125   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.545301   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.545427   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.545653   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.545828   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.545844   59477 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:50:56.811690   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:50:56.811726   59477 machine.go:97] duration metric: took 891.484788ms to provisionDockerMachine
	I0722 11:50:56.811740   59477 start.go:293] postStartSetup for "embed-certs-802149" (driver="kvm2")
	I0722 11:50:56.811772   59477 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:50:56.811791   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:56.812107   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:50:56.812137   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.814602   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.815007   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.815032   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.815143   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.815380   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.815566   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.815746   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:56.904332   59477 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:50:56.908423   59477 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:50:56.908451   59477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:50:56.908508   59477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:50:56.908587   59477 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:50:56.908680   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:50:56.919264   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:50:56.943783   59477 start.go:296] duration metric: took 132.033326ms for postStartSetup
	I0722 11:50:56.943814   59477 fix.go:56] duration metric: took 18.943205526s for fixHost
	I0722 11:50:56.943833   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.946256   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.946547   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.946575   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.946732   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.946929   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.947082   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.947188   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.947356   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.947518   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.947528   59477 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:50:57.056893   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649057.031410961
	
	I0722 11:50:57.056927   59477 fix.go:216] guest clock: 1721649057.031410961
	I0722 11:50:57.056936   59477 fix.go:229] Guest: 2024-07-22 11:50:57.031410961 +0000 UTC Remote: 2024-07-22 11:50:56.943818166 +0000 UTC m=+212.308172183 (delta=87.592795ms)
	I0722 11:50:57.056961   59477 fix.go:200] guest clock delta is within tolerance: 87.592795ms
	I0722 11:50:57.056970   59477 start.go:83] releasing machines lock for "embed-certs-802149", held for 19.056384178s
	I0722 11:50:57.057002   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.057268   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:57.059965   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.060412   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.060443   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.060671   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061167   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061345   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061428   59477 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:50:57.061479   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:57.061561   59477 ssh_runner.go:195] Run: cat /version.json
	I0722 11:50:57.061586   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:57.064433   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.064735   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.064856   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.064879   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.065018   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:57.065118   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.065143   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.065201   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:57.065298   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:57.065408   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:57.065481   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:57.065556   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:57.065624   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:57.065770   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:57.167044   59477 ssh_runner.go:195] Run: systemctl --version
	I0722 11:50:57.172714   59477 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:50:57.313674   59477 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:50:57.319474   59477 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:50:57.319535   59477 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:50:57.335011   59477 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:50:57.335031   59477 start.go:495] detecting cgroup driver to use...
	I0722 11:50:57.335093   59477 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:50:57.351191   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:50:57.365322   59477 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:50:57.365376   59477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:50:57.379264   59477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:50:57.393946   59477 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:50:57.510830   59477 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:50:57.687208   59477 docker.go:233] disabling docker service ...
	I0722 11:50:57.687269   59477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:50:57.703909   59477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:50:57.717812   59477 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:50:57.855988   59477 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:50:57.973911   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:50:57.988891   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:50:58.007784   59477 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 11:50:58.007841   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.019588   59477 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:50:58.019649   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.030056   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.042635   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.053368   59477 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:50:58.064180   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.074677   59477 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.092573   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.103630   59477 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:50:58.114065   59477 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:50:58.114131   59477 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:50:58.128769   59477 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:50:58.139226   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:50:58.301342   59477 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:50:58.455996   59477 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:50:58.456085   59477 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:50:58.460904   59477 start.go:563] Will wait 60s for crictl version
	I0722 11:50:58.460969   59477 ssh_runner.go:195] Run: which crictl
	I0722 11:50:58.464918   59477 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:50:58.501783   59477 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
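
Note: the `systemctl restart crio` above is followed by a bounded wait for the runtime socket ("Will wait 60s for socket path /var/run/crio/crio.sock"). A minimal standalone Go sketch of that kind of wait is shown below; the path and timeout come from the log, but the helper itself is illustrative and not minikube's implementation.

// wait_for_socket.go - illustrative only; not minikube's code.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path (e.g. a CRI socket) until it
// appears or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is present")
}
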
	I0722 11:50:58.501867   59477 ssh_runner.go:195] Run: crio --version
	I0722 11:50:58.529010   59477 ssh_runner.go:195] Run: crio --version
	I0722 11:50:58.566811   59477 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 11:50:58.568309   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:58.571088   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:58.571594   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:58.571620   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:58.571813   59477 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 11:50:58.575927   59477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:50:58.589002   59477 kubeadm.go:883] updating cluster {Name:embed-certs-802149 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:50:58.589126   59477 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:50:58.589187   59477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:50:58.625716   59477 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 11:50:58.625836   59477 ssh_runner.go:195] Run: which lz4
	I0722 11:50:58.629760   59477 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 11:50:58.634037   59477 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:50:58.634070   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 11:50:57.080830   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .Start
	I0722 11:50:57.080987   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring networks are active...
	I0722 11:50:57.081647   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring network default is active
	I0722 11:50:57.081955   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring network mk-old-k8s-version-101261 is active
	I0722 11:50:57.082277   59674 main.go:141] libmachine: (old-k8s-version-101261) Getting domain xml...
	I0722 11:50:57.083008   59674 main.go:141] libmachine: (old-k8s-version-101261) Creating domain...
	I0722 11:50:58.331212   59674 main.go:141] libmachine: (old-k8s-version-101261) Waiting to get IP...
	I0722 11:50:58.332090   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:58.332510   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:58.332594   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:58.332505   60690 retry.go:31] will retry after 310.971479ms: waiting for machine to come up
	I0722 11:50:58.645391   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:58.645871   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:58.645898   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:58.645841   60690 retry.go:31] will retry after 371.739884ms: waiting for machine to come up
	I0722 11:50:59.019622   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.020229   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.020258   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.020202   60690 retry.go:31] will retry after 459.770177ms: waiting for machine to come up
	I0722 11:50:59.482207   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.482871   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.482901   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.482830   60690 retry.go:31] will retry after 459.633846ms: waiting for machine to come up
	I0722 11:50:59.944748   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.945204   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.945234   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.945166   60690 retry.go:31] will retry after 661.206679ms: waiting for machine to come up
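
Note: the old-k8s-version-101261 start above is a retry loop: each failed DHCP-lease lookup schedules another attempt after a slightly longer delay ("will retry after ...ms: waiting for machine to come up"). A hedged Go sketch of that grow-the-delay pattern follows; lookupIP is a placeholder, not the libmachine/KVM driver API.

// retry_for_ip.go - sketch of the backoff pattern in the retry.go lines above.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// lookupIP stands in for querying the hypervisor for the domain's DHCP lease.
func lookupIP() (string, error) {
	return "", errNoLease // pretend the lease has not been handed out yet
}

func waitForIP(maxAttempts int) (string, error) {
	delay := 300 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("attempt %d failed, will retry after %s\n", attempt, delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // gradually back off between attempts
	}
	return "", fmt.Errorf("machine did not get an IP after %d attempts", maxAttempts)
}

func main() {
	if _, err := waitForIP(5); err != nil {
		fmt.Println(err)
	}
}
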
	I0722 11:51:00.149442   59477 crio.go:462] duration metric: took 1.519707341s to copy over tarball
	I0722 11:51:00.149516   59477 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:02.402666   59477 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.253119001s)
	I0722 11:51:02.402691   59477 crio.go:469] duration metric: took 2.253218813s to extract the tarball
	I0722 11:51:02.402699   59477 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:02.441191   59477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:02.487854   59477 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:51:02.487881   59477 cache_images.go:84] Images are preloaded, skipping loading
	I0722 11:51:02.487890   59477 kubeadm.go:934] updating node { 192.168.72.113 8443 v1.30.3 crio true true} ...
	I0722 11:51:02.488035   59477 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-802149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:02.488123   59477 ssh_runner.go:195] Run: crio config
	I0722 11:51:02.532769   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:51:02.532790   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:02.532801   59477 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:02.532833   59477 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.113 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-802149 NodeName:embed-certs-802149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:51:02.533018   59477 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.113
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-802149"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.113
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.113"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:02.533107   59477 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 11:51:02.543311   59477 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:02.543385   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:02.552865   59477 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0722 11:51:02.569231   59477 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:02.584952   59477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0722 11:51:02.601722   59477 ssh_runner.go:195] Run: grep 192.168.72.113	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:02.605830   59477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.113	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:02.617991   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:02.739082   59477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:02.756204   59477 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149 for IP: 192.168.72.113
	I0722 11:51:02.756226   59477 certs.go:194] generating shared ca certs ...
	I0722 11:51:02.756254   59477 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:02.756452   59477 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:02.756509   59477 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:02.756521   59477 certs.go:256] generating profile certs ...
	I0722 11:51:02.756641   59477 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/client.key
	I0722 11:51:02.756720   59477 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key.447fbea1
	I0722 11:51:02.756767   59477 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.key
	I0722 11:51:02.756907   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:02.756955   59477 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:02.756968   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:02.757004   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:02.757037   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:02.757073   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:02.757130   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:02.758009   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:02.791767   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:02.833143   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:02.859372   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:02.888441   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0722 11:51:02.926712   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 11:51:02.963931   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:02.986981   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 11:51:03.010885   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:03.033851   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:03.057467   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:03.080230   59477 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:03.096981   59477 ssh_runner.go:195] Run: openssl version
	I0722 11:51:03.103002   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:03.114012   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.118692   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.118743   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.124703   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:03.134986   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:03.145119   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.149396   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.149442   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.154767   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:03.165063   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:03.175292   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.179650   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.179691   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.184991   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:51:03.195065   59477 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:03.199423   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:03.205027   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:03.210699   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:03.216411   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:03.221888   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:03.227658   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
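
Note: the series of `openssl x509 -noout -checkend 86400` runs above asks whether each control-plane certificate remains valid for at least another 24 hours. A minimal Go equivalent of that check is sketched below; the certificate path is taken from the log for illustration only.

// checkend.go - rough Go analogue of `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid d from now.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("still valid for 24h:", ok)
}
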
	I0722 11:51:03.233098   59477 kubeadm.go:392] StartCluster: {Name:embed-certs-802149 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:03.233171   59477 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:03.233221   59477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:03.269240   59477 cri.go:89] found id: ""
	I0722 11:51:03.269311   59477 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:03.279739   59477 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:03.279758   59477 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:03.279809   59477 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:03.289523   59477 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:03.290456   59477 kubeconfig.go:125] found "embed-certs-802149" server: "https://192.168.72.113:8443"
	I0722 11:51:03.292369   59477 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:03.301716   59477 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.113
	I0722 11:51:03.301749   59477 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:03.301758   59477 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:03.301794   59477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:03.337520   59477 cri.go:89] found id: ""
	I0722 11:51:03.337587   59477 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:03.352758   59477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:03.362272   59477 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:03.362305   59477 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:03.362350   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:51:03.370574   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:03.370621   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:03.379339   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:51:03.387427   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:03.387470   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:03.395970   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:51:03.404226   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:03.404280   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:03.412683   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:51:03.420838   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:03.420877   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:03.429146   59477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:03.440442   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:03.565768   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:04.457748   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:00.608285   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:00.608737   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:00.608759   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:00.608685   60690 retry.go:31] will retry after 728.049334ms: waiting for machine to come up
	I0722 11:51:01.337864   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:01.338406   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:01.338437   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:01.338329   60690 retry.go:31] will retry after 1.060339766s: waiting for machine to come up
	I0722 11:51:02.400096   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:02.400633   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:02.400664   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:02.400580   60690 retry.go:31] will retry after 957.922107ms: waiting for machine to come up
	I0722 11:51:03.360231   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:03.360663   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:03.360692   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:03.360612   60690 retry.go:31] will retry after 1.717107267s: waiting for machine to come up
	I0722 11:51:05.080655   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:05.081172   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:05.081196   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:05.081111   60690 retry.go:31] will retry after 1.708281457s: waiting for machine to come up
	I0722 11:51:04.673803   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:04.746647   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
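
Note: on this restart path the tool re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init. A rough Go sketch of driving that same sequence is shown below; the shell commands are copied from the log, but the driver itself is illustrative and not minikube's own code.

// kubeadm_phases.go - illustrative driver for the init phases seen above;
// assumes the v1.30.3 binaries are already unpacked under /var/lib/minikube.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := exec.Command("/bin/bash", "-c",
			fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase))
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}
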
	I0722 11:51:04.870194   59477 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:04.870304   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.370787   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.870977   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.971259   59477 api_server.go:72] duration metric: took 1.101066217s to wait for apiserver process to appear ...
	I0722 11:51:05.971291   59477 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:51:05.971313   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:05.971841   59477 api_server.go:269] stopped: https://192.168.72.113:8443/healthz: Get "https://192.168.72.113:8443/healthz": dial tcp 192.168.72.113:8443: connect: connection refused
	I0722 11:51:06.471490   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.174013   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:09.174041   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:09.174055   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.201462   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.201513   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:09.471884   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.477573   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.477592   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:06.790946   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:06.791370   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:06.791398   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:06.791331   60690 retry.go:31] will retry after 2.398904394s: waiting for machine to come up
	I0722 11:51:09.193385   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:09.193778   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:09.193806   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:09.193704   60690 retry.go:31] will retry after 2.18416034s: waiting for machine to come up
	I0722 11:51:09.972279   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.982112   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.982144   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:10.471495   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:10.478784   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I0722 11:51:10.487326   59477 api_server.go:141] control plane version: v1.30.3
	I0722 11:51:10.487355   59477 api_server.go:131] duration metric: took 4.516056164s to wait for apiserver health ...
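
Note: the health wait above polls https://192.168.72.113:8443/healthz and tolerates the transient 403/500 responses until the apiserver's post-start hooks complete and the endpoint returns 200. A minimal Go sketch of such a polling loop follows; TLS verification is disabled only to keep the example short, and a real client should trust the cluster CA instead.

// healthz_wait.go - sketch of polling the apiserver /healthz endpoint until 200 OK.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403/500 while post-start hooks are still running; retry.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitHealthy("https://192.168.72.113:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
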
	I0722 11:51:10.487365   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:51:10.487374   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:10.488949   59477 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:51:10.490288   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:51:10.507047   59477 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:51:10.526828   59477 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:51:10.541695   59477 system_pods.go:59] 8 kube-system pods found
	I0722 11:51:10.541731   59477 system_pods.go:61] "coredns-7db6d8ff4d-s2zgw" [13ffaca7-beca-4c43-b7a7-2167fe71295c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:51:10.541741   59477 system_pods.go:61] "etcd-embed-certs-802149" [f81bfdc3-cc8f-40d3-9f6c-6b84b6490c07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:51:10.541752   59477 system_pods.go:61] "kube-apiserver-embed-certs-802149" [325b1597-385e-44df-b65c-2de853d792eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:51:10.541760   59477 system_pods.go:61] "kube-controller-manager-embed-certs-802149" [25d3ae23-fe5d-46b7-8d93-917d7c83912b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:51:10.541772   59477 system_pods.go:61] "kube-proxy-t9lkm" [0712acb3-3926-4b78-9c64-a7e46b1a4b18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 11:51:10.541780   59477 system_pods.go:61] "kube-scheduler-embed-certs-802149" [b521ffd3-9422-4df4-9f25-5e81a2d0fa9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:51:10.541788   59477 system_pods.go:61] "metrics-server-569cc877fc-wm2w8" [db886758-d7bb-41b3-b127-6f9fef839af0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:51:10.541799   59477 system_pods.go:61] "storage-provisioner" [291229fb-8a57-4976-911c-070ccc93adcd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 11:51:10.541810   59477 system_pods.go:74] duration metric: took 14.964696ms to wait for pod list to return data ...
	I0722 11:51:10.541822   59477 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:51:10.545280   59477 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:51:10.545307   59477 node_conditions.go:123] node cpu capacity is 2
	I0722 11:51:10.545327   59477 node_conditions.go:105] duration metric: took 3.49089ms to run NodePressure ...
	I0722 11:51:10.545349   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:10.812864   59477 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:51:10.817360   59477 kubeadm.go:739] kubelet initialised
	I0722 11:51:10.817379   59477 kubeadm.go:740] duration metric: took 4.491449ms waiting for restarted kubelet to initialise ...
	I0722 11:51:10.817387   59477 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:51:10.823766   59477 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.829370   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.829399   59477 pod_ready.go:81] duration metric: took 5.605447ms for pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.829411   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.829420   59477 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.835224   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "etcd-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.835250   59477 pod_ready.go:81] duration metric: took 5.819727ms for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.835261   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "etcd-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.835270   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.840324   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.840355   59477 pod_ready.go:81] duration metric: took 5.074415ms for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.840369   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.840378   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.939805   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.939828   59477 pod_ready.go:81] duration metric: took 99.423274ms for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.939837   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.939843   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t9lkm" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:11.329932   59477 pod_ready.go:92] pod "kube-proxy-t9lkm" in "kube-system" namespace has status "Ready":"True"
	I0722 11:51:11.329954   59477 pod_ready.go:81] duration metric: took 390.103451ms for pod "kube-proxy-t9lkm" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:11.329964   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:13.336193   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
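The pod_ready loop above polls each system-critical pod in kube-system until its Ready condition reports true or the 4m0s budget runs out. Expressed with plain kubectl against the same profile, the check is roughly the following; this is an illustrative equivalent only, not how the test harness actually issues the requests:

    # roughly equivalent readiness check, shown with kubectl for illustration
    kubectl --context embed-certs-802149 -n kube-system \
      wait --for=condition=Ready pod/kube-scheduler-embed-certs-802149 --timeout=4m0s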
	I0722 11:51:11.378924   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:11.379301   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:11.379324   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:11.379257   60690 retry.go:31] will retry after 3.119433482s: waiting for machine to come up
	I0722 11:51:14.501549   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.502004   59674 main.go:141] libmachine: (old-k8s-version-101261) Found IP for machine: 192.168.50.51
	I0722 11:51:14.502029   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has current primary IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.502040   59674 main.go:141] libmachine: (old-k8s-version-101261) Reserving static IP address...
	I0722 11:51:14.502410   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "old-k8s-version-101261", mac: "52:54:00:e5:34:9a", ip: "192.168.50.51"} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.502429   59674 main.go:141] libmachine: (old-k8s-version-101261) Reserved static IP address: 192.168.50.51
	I0722 11:51:14.502448   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | skip adding static IP to network mk-old-k8s-version-101261 - found existing host DHCP lease matching {name: "old-k8s-version-101261", mac: "52:54:00:e5:34:9a", ip: "192.168.50.51"}
	I0722 11:51:14.502464   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Getting to WaitForSSH function...
	I0722 11:51:14.502481   59674 main.go:141] libmachine: (old-k8s-version-101261) Waiting for SSH to be available...
	I0722 11:51:14.504709   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.504989   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.505018   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.505192   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using SSH client type: external
	I0722 11:51:14.505229   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa (-rw-------)
	I0722 11:51:14.505273   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:14.505287   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | About to run SSH command:
	I0722 11:51:14.505300   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | exit 0
	I0722 11:51:14.628343   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | SSH cmd err, output: <nil>: 
	I0722 11:51:14.628747   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetConfigRaw
	I0722 11:51:14.629343   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:14.631934   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.632294   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.632323   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.632541   59674 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/config.json ...
	I0722 11:51:14.632730   59674 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:14.632747   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:14.632934   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.635214   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.635567   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.635594   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.635663   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.635887   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.636070   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.636212   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.636492   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.636656   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.636665   59674 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:14.745179   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:14.745210   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:14.745456   59674 buildroot.go:166] provisioning hostname "old-k8s-version-101261"
	I0722 11:51:14.745482   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:14.745664   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.748709   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.749155   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.749187   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.749356   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.749528   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.749708   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.749851   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.750115   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.750325   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.750339   59674 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-101261 && echo "old-k8s-version-101261" | sudo tee /etc/hostname
	I0722 11:51:14.878323   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-101261
	
	I0722 11:51:14.878374   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.881403   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.881776   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.881799   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.882004   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.882191   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.882368   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.882523   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.882714   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.882886   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.882914   59674 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-101261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-101261/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-101261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:15.005182   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:15.005211   59674 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:15.005232   59674 buildroot.go:174] setting up certificates
	I0722 11:51:15.005244   59674 provision.go:84] configureAuth start
	I0722 11:51:15.005257   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:15.005510   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:15.008414   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.008818   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.008842   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.009021   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.011255   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.011549   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.011571   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.011712   59674 provision.go:143] copyHostCerts
	I0722 11:51:15.011784   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:15.011798   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:15.011862   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:15.011991   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:15.012003   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:15.012033   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:15.012117   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:15.012126   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:15.012156   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:15.012235   59674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-101261 san=[127.0.0.1 192.168.50.51 localhost minikube old-k8s-version-101261]
	I0722 11:51:16.173298   60225 start.go:364] duration metric: took 2m0.300081245s to acquireMachinesLock for "default-k8s-diff-port-605740"
	I0722 11:51:16.173351   60225 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:51:16.173359   60225 fix.go:54] fixHost starting: 
	I0722 11:51:16.173747   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:51:16.173788   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:51:16.189994   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0722 11:51:16.190364   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:51:16.190849   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:51:16.190880   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:51:16.191295   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:51:16.191520   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:16.191701   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:51:16.193226   60225 fix.go:112] recreateIfNeeded on default-k8s-diff-port-605740: state=Stopped err=<nil>
	I0722 11:51:16.193246   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	W0722 11:51:16.193413   60225 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:51:16.195294   60225 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-605740" ...
	I0722 11:51:15.514379   59674 provision.go:177] copyRemoteCerts
	I0722 11:51:15.514438   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:15.514471   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.517061   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.517350   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.517375   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.517523   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.517692   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.517856   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.517976   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:15.598446   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:15.622512   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 11:51:15.645865   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 11:51:15.669136   59674 provision.go:87] duration metric: took 663.880253ms to configureAuth
	I0722 11:51:15.669166   59674 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:15.669360   59674 config.go:182] Loaded profile config "old-k8s-version-101261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 11:51:15.669441   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.672245   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.672720   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.672769   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.672859   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.673066   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.673228   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.673348   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.673589   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:15.673764   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:15.673784   59674 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:15.935046   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:15.935071   59674 machine.go:97] duration metric: took 1.302328915s to provisionDockerMachine
	I0722 11:51:15.935082   59674 start.go:293] postStartSetup for "old-k8s-version-101261" (driver="kvm2")
	I0722 11:51:15.935094   59674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:15.935114   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:15.935445   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:15.935485   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.938454   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.938802   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.938828   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.939013   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.939212   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.939341   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.939477   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.023536   59674 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:16.028446   59674 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:16.028474   59674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:16.028542   59674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:16.028639   59674 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:16.028746   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:16.038705   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:16.065421   59674 start.go:296] duration metric: took 130.328201ms for postStartSetup
	I0722 11:51:16.065455   59674 fix.go:56] duration metric: took 19.008317885s for fixHost
	I0722 11:51:16.065480   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.068098   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.068330   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.068354   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.068486   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.068697   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.068883   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.069035   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.069215   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:16.069371   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:16.069380   59674 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:51:16.173115   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649076.142588532
	
	I0722 11:51:16.173135   59674 fix.go:216] guest clock: 1721649076.142588532
	I0722 11:51:16.173149   59674 fix.go:229] Guest: 2024-07-22 11:51:16.142588532 +0000 UTC Remote: 2024-07-22 11:51:16.065460257 +0000 UTC m=+220.687192060 (delta=77.128275ms)
	I0722 11:51:16.173189   59674 fix.go:200] guest clock delta is within tolerance: 77.128275ms
	I0722 11:51:16.173196   59674 start.go:83] releasing machines lock for "old-k8s-version-101261", held for 19.116093793s
	I0722 11:51:16.173224   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.173497   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:16.176102   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.176522   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.176564   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.176712   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177189   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177387   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177476   59674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:16.177519   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.177627   59674 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:16.177650   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.180365   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180402   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180751   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.180773   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180806   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.180819   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180908   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.181020   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.181091   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.181168   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.181254   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.181331   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.181346   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.181492   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.262013   59674 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:16.292921   59674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:16.437729   59674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:16.443840   59674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:16.443929   59674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:16.459686   59674 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:16.459703   59674 start.go:495] detecting cgroup driver to use...
	I0722 11:51:16.459761   59674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:16.474514   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:16.487808   59674 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:16.487862   59674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:16.500977   59674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:16.514210   59674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:16.629558   59674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:16.810274   59674 docker.go:233] disabling docker service ...
	I0722 11:51:16.810351   59674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:16.829708   59674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:16.848587   59674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:16.973745   59674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:17.114538   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:17.128727   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:17.147575   59674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 11:51:17.147628   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.157881   59674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:17.157939   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.168881   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.179407   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.189894   59674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
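The three sed edits above point CRI-O at the registry.k8s.io/pause:3.2 pause image and switch it to the cgroupfs cgroup manager with conmon placed in the pod cgroup. A quick way to confirm the result on the guest would be a grep over the drop-in; the expected values below are inferred from the sed commands just run, not captured from the test VM:

    # illustrative check of the drop-in after the edits above
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, given the sed commands above:
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"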
	I0722 11:51:17.201433   59674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:17.210901   59674 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:17.210954   59674 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:17.224683   59674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:51:17.235711   59674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:17.366833   59674 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:17.508852   59674 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:17.508932   59674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:17.514001   59674 start.go:563] Will wait 60s for crictl version
	I0722 11:51:17.514051   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:17.517678   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:17.555193   59674 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:17.555272   59674 ssh_runner.go:195] Run: crio --version
	I0722 11:51:17.583250   59674 ssh_runner.go:195] Run: crio --version
	I0722 11:51:17.615045   59674 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 11:51:15.837077   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:17.838129   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:17.616423   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:17.619616   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:17.620012   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:17.620043   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:17.620213   59674 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:17.624632   59674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:17.639759   59674 kubeadm.go:883] updating cluster {Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:17.639882   59674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 11:51:17.639923   59674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:17.688299   59674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:51:17.688370   59674 ssh_runner.go:195] Run: which lz4
	I0722 11:51:17.692462   59674 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 11:51:17.696723   59674 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:51:17.696761   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 11:51:19.364933   59674 crio.go:462] duration metric: took 1.672511697s to copy over tarball
	I0722 11:51:19.365010   59674 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:16.196500   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Start
	I0722 11:51:16.196676   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring networks are active...
	I0722 11:51:16.197307   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring network default is active
	I0722 11:51:16.197719   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring network mk-default-k8s-diff-port-605740 is active
	I0722 11:51:16.198143   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Getting domain xml...
	I0722 11:51:16.198839   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Creating domain...
	I0722 11:51:17.463368   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting to get IP...
	I0722 11:51:17.464268   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.464666   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.464716   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:17.464632   60829 retry.go:31] will retry after 215.824583ms: waiting for machine to come up
	I0722 11:51:17.682231   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.682588   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.682616   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:17.682546   60829 retry.go:31] will retry after 345.816562ms: waiting for machine to come up
	I0722 11:51:18.030040   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.030591   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.030625   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.030526   60829 retry.go:31] will retry after 332.854172ms: waiting for machine to come up
	I0722 11:51:18.365009   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.365493   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.365522   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.365455   60829 retry.go:31] will retry after 478.33893ms: waiting for machine to come up
	I0722 11:51:18.846014   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.846447   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.846475   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.846386   60829 retry.go:31] will retry after 484.269461ms: waiting for machine to come up
	I0722 11:51:19.332181   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:19.332572   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:19.332607   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:19.332523   60829 retry.go:31] will retry after 856.318702ms: waiting for machine to come up
	I0722 11:51:20.190301   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.190751   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.190775   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:20.190702   60829 retry.go:31] will retry after 747.6345ms: waiting for machine to come up
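The repeated "retry.go:31] will retry after ..." lines are a bounded poll: libmachine keeps asking libvirt for the domain's DHCP lease and backs off between attempts until an IP appears. A minimal standalone sketch of the same pattern, assuming a host with virsh available (this is not minikube's retry.go, just an illustration):

    # poll a libvirt domain for its DHCP lease with increasing delays (illustrative)
    DOMAIN=default-k8s-diff-port-605740
    for delay in 1 2 3 5 8 13; do
      ip=$(virsh domifaddr "$DOMAIN" | awk '/ipv4/ {print $4}')
      if [ -n "$ip" ]; then echo "machine is up at ${ip%/*}"; break; fi
      echo "no lease yet, retrying in ${delay}s"
      sleep "$delay"
    done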
	I0722 11:51:19.838679   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:21.850685   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:24.338532   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:22.347245   59674 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.982204367s)
	I0722 11:51:22.347275   59674 crio.go:469] duration metric: took 2.982313685s to extract the tarball
	I0722 11:51:22.347283   59674 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:22.390059   59674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:22.429356   59674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:51:22.429383   59674 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 11:51:22.429499   59674 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.429520   59674 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.429524   59674 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.429545   59674 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 11:51:22.429498   59674 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.429497   59674 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.429529   59674 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.429498   59674 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.431549   59674 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.431556   59674 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 11:51:22.431570   59674 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.431588   59674 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.431611   59674 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.431555   59674 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.431666   59674 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.431675   59674 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.603462   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.604733   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.608788   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.611177   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.616981   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.634838   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.674004   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 11:51:22.706162   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.730052   59674 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 11:51:22.730112   59674 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 11:51:22.730129   59674 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.730142   59674 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.730183   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.730196   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.760229   59674 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 11:51:22.760271   59674 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.760322   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.787207   59674 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 11:51:22.787244   59674 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 11:51:22.787254   59674 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.787273   59674 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.787303   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.787311   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.828611   59674 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 11:51:22.828656   59674 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.828703   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.841609   59674 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 11:51:22.841648   59674 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 11:51:22.841692   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.913517   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.913549   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.913557   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.913605   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.913519   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.913605   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.913625   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 11:51:23.063640   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 11:51:23.063652   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 11:51:23.063742   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 11:51:23.063766   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 11:51:23.070202   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0722 11:51:23.073265   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 11:51:23.073310   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 11:51:23.073358   59674 cache_images.go:92] duration metric: took 643.962788ms to LoadCachedImages
	W0722 11:51:23.073425   59674 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0722 11:51:23.073438   59674 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.20.0 crio true true} ...
	I0722 11:51:23.073584   59674 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-101261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:23.073666   59674 ssh_runner.go:195] Run: crio config
	I0722 11:51:23.125532   59674 cni.go:84] Creating CNI manager for ""
	I0722 11:51:23.125554   59674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:23.125566   59674 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:23.125590   59674 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-101261 NodeName:old-k8s-version-101261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 11:51:23.125753   59674 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-101261"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:23.125818   59674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 11:51:23.136207   59674 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:23.136277   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:23.146103   59674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0722 11:51:23.163756   59674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:23.183108   59674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0722 11:51:23.201223   59674 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:23.205369   59674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:23.218711   59674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:23.339415   59674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:23.358601   59674 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261 for IP: 192.168.50.51
	I0722 11:51:23.358622   59674 certs.go:194] generating shared ca certs ...
	I0722 11:51:23.358654   59674 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:23.358813   59674 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:23.358865   59674 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:23.358877   59674 certs.go:256] generating profile certs ...
	I0722 11:51:23.358990   59674 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/client.key
	I0722 11:51:23.359058   59674 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key.455618c3
	I0722 11:51:23.359110   59674 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key
	I0722 11:51:23.359248   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:23.359286   59674 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:23.359300   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:23.359332   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:23.359363   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:23.359393   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:23.359445   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:23.360290   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:23.407113   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:23.439799   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:23.484136   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:23.513902   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 11:51:23.551266   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:51:23.581930   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:23.612470   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:51:23.644003   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:23.671068   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:23.695514   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:23.722711   59674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:23.742312   59674 ssh_runner.go:195] Run: openssl version
	I0722 11:51:23.749680   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:23.763975   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.769799   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.769848   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.777286   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:51:23.788007   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:23.799005   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.803367   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.803405   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.809239   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:23.820095   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:23.832492   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.837230   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.837268   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.842861   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:23.853772   59674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:23.858178   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:23.864134   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:23.870035   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:23.875939   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:23.881552   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:23.887286   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 11:51:23.893029   59674 kubeadm.go:392] StartCluster: {Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:23.893133   59674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:23.893184   59674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:23.939121   59674 cri.go:89] found id: ""
	I0722 11:51:23.939187   59674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:23.951089   59674 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:23.951108   59674 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:23.951154   59674 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:23.962212   59674 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:23.963627   59674 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-101261" does not appear in /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:51:23.964627   59674 kubeconfig.go:62] /home/jenkins/minikube-integration/19313-5960/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-101261" cluster setting kubeconfig missing "old-k8s-version-101261" context setting]
	I0722 11:51:23.966075   59674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:24.070513   59674 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:24.081628   59674 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0722 11:51:24.081662   59674 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:24.081674   59674 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:24.081728   59674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:24.117673   59674 cri.go:89] found id: ""
	I0722 11:51:24.117750   59674 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:24.134081   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:24.144294   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:24.144315   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:24.144366   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:51:24.153640   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:24.153685   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:24.163252   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:51:24.173762   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:24.173815   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:24.183272   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:51:24.194090   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:24.194148   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:24.205213   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:51:24.215709   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:24.215787   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:24.226876   59674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:24.237966   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:24.378277   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:20.939620   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.940073   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.940106   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:20.940007   60829 retry.go:31] will retry after 1.295925992s: waiting for machine to come up
	I0722 11:51:22.237614   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:22.238096   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:22.238128   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:22.238045   60829 retry.go:31] will retry after 1.652562745s: waiting for machine to come up
	I0722 11:51:23.891976   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:23.892496   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:23.892519   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:23.892468   60829 retry.go:31] will retry after 2.313623774s: waiting for machine to come up
	I0722 11:51:24.839903   59477 pod_ready.go:92] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:51:24.839939   59477 pod_ready.go:81] duration metric: took 13.509966584s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:24.839957   59477 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:26.847104   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:29.345675   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:25.787025   59674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.408710522s)
	I0722 11:51:25.787059   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.031231   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.120122   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.216108   59674 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:26.216204   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:26.717257   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:27.216782   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:27.716476   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:28.216529   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:28.716302   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:29.216249   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:29.717071   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:30.216364   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:26.207294   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:26.207841   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:26.207867   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:26.207805   60829 retry.go:31] will retry after 2.606127418s: waiting for machine to come up
	I0722 11:51:28.817432   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:28.817795   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:28.817851   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:28.817748   60829 retry.go:31] will retry after 2.617524673s: waiting for machine to come up
	I0722 11:51:31.346476   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:33.847820   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:30.716961   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.216474   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.716685   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:32.216748   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:32.716886   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:33.216333   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:33.717052   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:34.217128   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:34.716466   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:35.216975   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.436413   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:31.436710   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:31.436745   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:31.436665   60829 retry.go:31] will retry after 3.455203757s: waiting for machine to come up
	I0722 11:51:34.896151   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.896595   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Found IP for machine: 192.168.39.87
	I0722 11:51:34.896619   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Reserving static IP address...
	I0722 11:51:34.896637   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has current primary IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.897007   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Reserved static IP address: 192.168.39.87
	I0722 11:51:34.897037   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-605740", mac: "52:54:00:23:45:e9", ip: "192.168.39.87"} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:34.897074   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for SSH to be available...
	I0722 11:51:34.897094   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | skip adding static IP to network mk-default-k8s-diff-port-605740 - found existing host DHCP lease matching {name: "default-k8s-diff-port-605740", mac: "52:54:00:23:45:e9", ip: "192.168.39.87"}
	I0722 11:51:34.897107   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Getting to WaitForSSH function...
	I0722 11:51:34.899104   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.899428   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:34.899450   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.899570   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Using SSH client type: external
	I0722 11:51:34.899594   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa (-rw-------)
	I0722 11:51:34.899619   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:34.899636   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | About to run SSH command:
	I0722 11:51:34.899651   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | exit 0
	I0722 11:51:35.028440   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | SSH cmd err, output: <nil>: 
	I0722 11:51:35.028814   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetConfigRaw
	I0722 11:51:35.029407   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:35.031646   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.031967   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.031998   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.032179   60225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/config.json ...
	I0722 11:51:35.032355   60225 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:35.032372   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:35.032587   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.034608   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.034924   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.034944   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.035089   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.035242   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.035368   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.035497   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.035637   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.035812   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.035823   60225 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:35.148621   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:35.148655   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.148914   60225 buildroot.go:166] provisioning hostname "default-k8s-diff-port-605740"
	I0722 11:51:35.148945   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.149128   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.151753   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.152146   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.152170   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.152294   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.152461   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.152591   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.152706   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.152847   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.153057   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.153079   60225 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-605740 && echo "default-k8s-diff-port-605740" | sudo tee /etc/hostname
	I0722 11:51:35.278248   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-605740
	
	I0722 11:51:35.278277   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.281778   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.282158   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.282189   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.282361   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.282539   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.282712   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.282826   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.283014   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.283239   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.283266   60225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-605740' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-605740/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-605740' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:35.405142   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:35.405176   60225 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:35.405215   60225 buildroot.go:174] setting up certificates
	I0722 11:51:35.405228   60225 provision.go:84] configureAuth start
	I0722 11:51:35.405240   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.405502   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:35.407912   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.408262   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.408284   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.408435   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.410456   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.410794   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.410821   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.410959   60225 provision.go:143] copyHostCerts
	I0722 11:51:35.411021   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:35.411034   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:35.411613   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:35.411720   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:35.411729   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:35.411749   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:35.411803   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:35.411811   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:35.411827   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:35.411881   60225 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-605740 san=[127.0.0.1 192.168.39.87 default-k8s-diff-port-605740 localhost minikube]
	I0722 11:51:36.476985   58921 start.go:364] duration metric: took 53.473936955s to acquireMachinesLock for "no-preload-339929"
	I0722 11:51:36.477060   58921 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:51:36.477071   58921 fix.go:54] fixHost starting: 
	I0722 11:51:36.477497   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:51:36.477538   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:51:36.494783   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33955
	I0722 11:51:36.495220   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:51:36.495728   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:51:36.495749   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:51:36.496045   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:51:36.496241   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:36.496399   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:51:36.497658   58921 fix.go:112] recreateIfNeeded on no-preload-339929: state=Stopped err=<nil>
	I0722 11:51:36.497681   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	W0722 11:51:36.497840   58921 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:51:36.499655   58921 out.go:177] * Restarting existing kvm2 VM for "no-preload-339929" ...
	I0722 11:51:35.787061   60225 provision.go:177] copyRemoteCerts
	I0722 11:51:35.787119   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:35.787143   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.789647   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.790048   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.790081   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.790289   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.790502   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.790665   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.790815   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:35.878791   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0722 11:51:35.902034   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:51:35.925234   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:35.948008   60225 provision.go:87] duration metric: took 542.764534ms to configureAuth
	I0722 11:51:35.948038   60225 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:35.948231   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:51:35.948315   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.951029   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.951381   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.951413   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.951561   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.951777   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.951927   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.952064   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.952196   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.952447   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.952465   60225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:36.234284   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:36.234329   60225 machine.go:97] duration metric: took 1.201960693s to provisionDockerMachine
	I0722 11:51:36.234342   60225 start.go:293] postStartSetup for "default-k8s-diff-port-605740" (driver="kvm2")
	I0722 11:51:36.234355   60225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:36.234375   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.234712   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:36.234742   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.237536   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.237897   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.237928   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.238045   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.238253   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.238435   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.238580   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.322600   60225 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:36.326734   60225 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:36.326753   60225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:36.326809   60225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:36.326893   60225 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:36.326981   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:36.335877   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:36.359701   60225 start.go:296] duration metric: took 125.346106ms for postStartSetup
	I0722 11:51:36.359734   60225 fix.go:56] duration metric: took 20.186375753s for fixHost
	I0722 11:51:36.359751   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.362282   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.362581   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.362603   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.362782   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.362976   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.363121   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.363218   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.363355   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:36.363506   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:36.363515   60225 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 11:51:36.476833   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649096.450051771
	
	I0722 11:51:36.476869   60225 fix.go:216] guest clock: 1721649096.450051771
	I0722 11:51:36.476877   60225 fix.go:229] Guest: 2024-07-22 11:51:36.450051771 +0000 UTC Remote: 2024-07-22 11:51:36.359737602 +0000 UTC m=+140.620851572 (delta=90.314169ms)
	I0722 11:51:36.476895   60225 fix.go:200] guest clock delta is within tolerance: 90.314169ms
	I0722 11:51:36.476900   60225 start.go:83] releasing machines lock for "default-k8s-diff-port-605740", held for 20.303575504s
	I0722 11:51:36.476926   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.477201   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:36.480567   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.480990   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.481020   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.481182   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481657   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481827   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481906   60225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:36.481947   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.482026   60225 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:36.482044   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.484577   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.484762   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485029   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.485054   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485199   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.485224   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485246   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.485393   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.485406   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.485524   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.485537   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.485652   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.485729   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.485788   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.565892   60225 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:36.592221   60225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:36.739153   60225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:36.746870   60225 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:36.746933   60225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:36.766745   60225 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:36.766769   60225 start.go:495] detecting cgroup driver to use...
	I0722 11:51:36.766837   60225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:36.782140   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:36.797037   60225 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:36.797118   60225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:36.810796   60225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:36.823955   60225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:36.943613   60225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:37.123238   60225 docker.go:233] disabling docker service ...
	I0722 11:51:37.123318   60225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:37.138682   60225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:37.153426   60225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:37.279469   60225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:37.404250   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:37.428047   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:37.446939   60225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 11:51:37.446994   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.457326   60225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:37.457400   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.468141   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.479246   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.489857   60225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:37.502713   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.517197   60225 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.537115   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.548917   60225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:37.559530   60225 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:37.559590   60225 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:37.574785   60225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:51:37.585589   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:37.730483   60225 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:37.888282   60225 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:37.888373   60225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:37.893498   60225 start.go:563] Will wait 60s for crictl version
	I0722 11:51:37.893555   60225 ssh_runner.go:195] Run: which crictl
	I0722 11:51:37.897212   60225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:37.940959   60225 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:37.941054   60225 ssh_runner.go:195] Run: crio --version
	I0722 11:51:37.969273   60225 ssh_runner.go:195] Run: crio --version
	I0722 11:51:38.001475   60225 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
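The in-place sed edits a few lines above amount to a small CRI-O override plus a crictl client config. As a minimal sketch of what they leave on disk (key assignments only; section headers and the ISO's other defaults are omitted, so the real 02-crio.conf contains more than this):

	/etc/crictl.yaml
	  runtime-endpoint: unix:///var/run/crio/crio.sock

	/etc/crio/crio.conf.d/02-crio.conf (edited keys)
	  pause_image = "registry.k8s.io/pause:3.9"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]

The br_netfilter modprobe and the ip_forward write are the usual prerequisites for bridged pod traffic; the subsequent systemctl restart crio makes all of the above take effect before kubeadm runs.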
	I0722 11:51:36.345564   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:38.349105   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:35.716593   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.216517   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.716294   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:37.217023   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:37.716511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:38.216231   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:38.716522   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:39.216492   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:39.716478   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:40.216337   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.500994   58921 main.go:141] libmachine: (no-preload-339929) Calling .Start
	I0722 11:51:36.501149   58921 main.go:141] libmachine: (no-preload-339929) Ensuring networks are active...
	I0722 11:51:36.501737   58921 main.go:141] libmachine: (no-preload-339929) Ensuring network default is active
	I0722 11:51:36.502002   58921 main.go:141] libmachine: (no-preload-339929) Ensuring network mk-no-preload-339929 is active
	I0722 11:51:36.502421   58921 main.go:141] libmachine: (no-preload-339929) Getting domain xml...
	I0722 11:51:36.503225   58921 main.go:141] libmachine: (no-preload-339929) Creating domain...
	I0722 11:51:37.794982   58921 main.go:141] libmachine: (no-preload-339929) Waiting to get IP...
	I0722 11:51:37.795825   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:37.796235   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:37.796291   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:37.796218   61023 retry.go:31] will retry after 217.454766ms: waiting for machine to come up
	I0722 11:51:38.015757   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.016236   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.016258   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.016185   61023 retry.go:31] will retry after 374.564997ms: waiting for machine to come up
	I0722 11:51:38.392755   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.393280   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.393310   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.393238   61023 retry.go:31] will retry after 462.45005ms: waiting for machine to come up
	I0722 11:51:38.856969   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.857508   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.857539   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.857455   61023 retry.go:31] will retry after 440.89249ms: waiting for machine to come up
	I0722 11:51:39.300253   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:39.300834   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:39.300860   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:39.300774   61023 retry.go:31] will retry after 746.547558ms: waiting for machine to come up
	I0722 11:51:40.048708   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:40.049175   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:40.049211   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:40.049133   61023 retry.go:31] will retry after 608.540931ms: waiting for machine to come up
	I0722 11:51:38.002695   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:38.005678   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:38.006057   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:38.006085   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:38.006276   60225 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:38.010327   60225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:38.023216   60225 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:38.023326   60225 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:51:38.023375   60225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:38.059519   60225 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 11:51:38.059603   60225 ssh_runner.go:195] Run: which lz4
	I0722 11:51:38.063709   60225 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 11:51:38.068879   60225 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:51:38.068903   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 11:51:39.570299   60225 crio.go:462] duration metric: took 1.50662056s to copy over tarball
	I0722 11:51:39.570380   60225 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:40.846268   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:42.848761   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:40.716395   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:41.216516   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:41.716363   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:42.217236   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:42.716938   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:43.216950   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:43.717242   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.216318   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.716925   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.216991   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:40.658992   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:40.659502   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:40.659542   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:40.659447   61023 retry.go:31] will retry after 974.447874ms: waiting for machine to come up
	I0722 11:51:41.636057   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:41.636596   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:41.636620   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:41.636538   61023 retry.go:31] will retry after 1.040271869s: waiting for machine to come up
	I0722 11:51:42.678559   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:42.678995   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:42.679018   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:42.678938   61023 retry.go:31] will retry after 1.797018808s: waiting for machine to come up
	I0722 11:51:44.477360   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:44.477729   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:44.477764   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:44.477687   61023 retry.go:31] will retry after 2.040933698s: waiting for machine to come up
	I0722 11:51:41.921416   60225 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.35100934s)
	I0722 11:51:41.921453   60225 crio.go:469] duration metric: took 2.351127326s to extract the tarball
	I0722 11:51:41.921460   60225 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:41.959856   60225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:42.011834   60225 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:51:42.011864   60225 cache_images.go:84] Images are preloaded, skipping loading
	I0722 11:51:42.011874   60225 kubeadm.go:934] updating node { 192.168.39.87 8444 v1.30.3 crio true true} ...
	I0722 11:51:42.012016   60225 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-605740 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
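In the kubelet unit snippet above, the bare ExecStart= line is the standard systemd override idiom: an empty assignment clears the ExecStart inherited from the base kubelet.service, and the following line substitutes minikube's own command line (systemd rejects a second ExecStart for non-oneshot services unless the list is reset first). This is the content written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down; the same two-line pattern works for any drop-in, e.g. (generic illustration, not from this run):

	[Service]
	ExecStart=
	ExecStart=/usr/bin/some-daemon --with --new-flags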
	I0722 11:51:42.012101   60225 ssh_runner.go:195] Run: crio config
	I0722 11:51:42.067629   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:51:42.067650   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:42.067661   60225 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:42.067681   60225 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.87 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-605740 NodeName:default-k8s-diff-port-605740 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.87"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:51:42.067849   60225 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.87
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-605740"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.87"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:42.067926   60225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 11:51:42.079267   60225 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:42.079331   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:42.089696   60225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0722 11:51:42.109204   60225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:42.125186   60225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0722 11:51:42.143217   60225 ssh_runner.go:195] Run: grep 192.168.39.87	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:42.147117   60225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.87	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:42.159283   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:42.297313   60225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:42.315795   60225 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740 for IP: 192.168.39.87
	I0722 11:51:42.315819   60225 certs.go:194] generating shared ca certs ...
	I0722 11:51:42.315838   60225 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:42.316036   60225 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:42.316104   60225 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:42.316121   60225 certs.go:256] generating profile certs ...
	I0722 11:51:42.316211   60225 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/client.key
	I0722 11:51:42.316281   60225 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.key.82803a6c
	I0722 11:51:42.316344   60225 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.key
	I0722 11:51:42.316515   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:42.316562   60225 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:42.316575   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:42.316606   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:42.316642   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:42.316673   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:42.316729   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:42.317611   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:42.368371   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:42.396161   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:42.423661   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:42.461478   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 11:51:42.492145   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:51:42.523047   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:42.551774   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 11:51:42.576922   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:42.600869   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:42.624223   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:42.647454   60225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:42.664055   60225 ssh_runner.go:195] Run: openssl version
	I0722 11:51:42.670102   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:42.681220   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.685927   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.685979   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.691823   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:42.702680   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:42.713592   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.719980   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.720042   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.727573   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:42.741805   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:42.756511   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.761951   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.762007   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.767540   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
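The link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: tools that point at a CA directory such as /etc/ssl/certs look certificates up by the hash of their subject, so each PEM needs a <hash>.0 symlink beside it. The hash is exactly what the preceding openssl x509 -hash -noout calls print, e.g. (output inferred from the link name chosen for minikubeCA.pem):

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941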
	I0722 11:51:42.777758   60225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:42.782242   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:42.787989   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:42.793552   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:42.799083   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:42.804666   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:42.810222   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 11:51:42.818545   60225 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:42.818639   60225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:42.818689   60225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:42.869630   60225 cri.go:89] found id: ""
	I0722 11:51:42.869706   60225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:42.881642   60225 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:42.881666   60225 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:42.881716   60225 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:42.891566   60225 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:42.892605   60225 kubeconfig.go:125] found "default-k8s-diff-port-605740" server: "https://192.168.39.87:8444"
	I0722 11:51:42.894819   60225 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:42.906152   60225 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.87
	I0722 11:51:42.906184   60225 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:42.906197   60225 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:42.906244   60225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:42.943687   60225 cri.go:89] found id: ""
	I0722 11:51:42.943765   60225 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:42.962989   60225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:42.974334   60225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:42.974351   60225 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:42.974398   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 11:51:42.985009   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:42.985069   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:42.996084   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 11:51:43.006592   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:43.006643   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:43.017500   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 11:51:43.026779   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:43.026853   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:43.037913   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 11:51:43.048504   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:43.048548   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:43.058045   60225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:43.067626   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:43.195638   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.027881   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.237863   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.306672   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
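Because kubelet and etcd state already exist on disk, this is the cluster-restart path: rather than a full kubeadm init, minikube replays the individual init phases shown above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. The control-plane and etcd phases write static pod manifests into the staticPodPath configured earlier, so a quick manual sanity check on the node (hypothetical, not part of the test) would be:

	$ sudo ls /etc/kubernetes/manifests
	etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml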
	I0722 11:51:44.409525   60225 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:44.409655   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.909710   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.409772   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.465579   60225 api_server.go:72] duration metric: took 1.056052731s to wait for apiserver process to appear ...
	I0722 11:51:45.465613   60225 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:51:45.465634   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:45.466164   60225 api_server.go:269] stopped: https://192.168.39.87:8444/healthz: Get "https://192.168.39.87:8444/healthz": dial tcp 192.168.39.87:8444: connect: connection refused
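The healthz polling that resumes below (after the interleaved output from the other profiles) hits https://192.168.39.87:8444/healthz without credentials, so the progression of responses is the signal rather than the codes themselves: connection refused while the apiserver container is still coming up, then 403 for system:anonymous, presumably because the bootstrap RBAC rules that let unauthenticated clients read /healthz have not been created yet, then 500 with the individual post-start hooks listed until they all report ok. Reproducing the probe by hand would look roughly like this (hypothetical invocation, not part of the test run):

	$ curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.39.87:8444/healthz
	403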
	I0722 11:51:45.349550   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:47.847373   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:45.717299   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.216545   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.717273   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:47.217030   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:47.716837   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:48.216368   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:48.716993   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:49.216273   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:49.717087   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:50.216313   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.520086   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:46.520553   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:46.520583   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:46.520514   61023 retry.go:31] will retry after 2.21537525s: waiting for machine to come up
	I0722 11:51:48.737964   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:48.738435   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:48.738478   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:48.738387   61023 retry.go:31] will retry after 3.351574636s: waiting for machine to come up
	I0722 11:51:45.966026   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:48.955885   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:48.955919   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:48.955938   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.001144   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:49.001176   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:49.001190   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.011522   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:49.011567   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:49.466002   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.470318   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:49.470339   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:49.965932   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.974634   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:49.974659   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:50.466354   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:50.471348   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:50.471375   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:50.966014   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:50.970321   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:50.970344   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:51.466452   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:51.470676   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:51.470703   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:51.966303   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:51.970628   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:51.970654   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:52.466173   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:52.473153   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 200:
	ok
	I0722 11:51:52.479257   60225 api_server.go:141] control plane version: v1.30.3
	I0722 11:51:52.479280   60225 api_server.go:131] duration metric: took 7.013661456s to wait for apiserver health ...
	I0722 11:51:52.479289   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:51:52.479295   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:52.480886   60225 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
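
	(editor's note) The healthz sequence above shows the usual progression while a control plane restarts: the anonymous probe first gets 403 ("system:anonymous" cannot get /healthz), then 500 while poststarthooks such as rbac/bootstrap-roles and apiservice-discovery-controller are still completing, and finally 200 after about 7s. A minimal sketch of that polling pattern follows; it is not minikube's api_server.go, and it skips TLS verification and client-certificate auth for brevity (minikube authenticates with client certs, which is why the anonymous 403 is expected).

	// Poll an apiserver /healthz endpoint until it returns 200 or a deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: "ok"
				}
				// Non-200 responses carry the [+]/[-] check list seen in the log.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.87:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
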
	I0722 11:51:50.346624   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:52.847483   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:50.716844   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:51.216793   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:51.716262   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.216710   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.716538   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:53.216424   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:53.716256   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:54.216266   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:54.716357   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:55.217214   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.091480   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:52.091931   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:52.091958   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:52.091893   61023 retry.go:31] will retry after 3.862235046s: waiting for machine to come up
	I0722 11:51:52.481952   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:51:52.493302   60225 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:51:52.517874   60225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:51:52.525926   60225 system_pods.go:59] 8 kube-system pods found
	I0722 11:51:52.525951   60225 system_pods.go:61] "coredns-7db6d8ff4d-dp56v" [5027da7d-5dc8-4ac5-ae15-ec99dffdce28] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:51:52.525960   60225 system_pods.go:61] "etcd-default-k8s-diff-port-605740" [648d4b21-2c2a-4ac7-a114-660379463d7d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:51:52.525967   60225 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-605740" [89ae1525-c944-4645-8951-e8834c9347b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:51:52.525978   60225 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-605740" [ff83ae5c-1dea-4633-afb8-c6487d1463b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:51:52.525983   60225 system_pods.go:61] "kube-proxy-ssttk" [6967a89c-ac7d-413f-bd0e-504367edca66] Running
	I0722 11:51:52.525991   60225 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-605740" [f930864f-4486-4c95-96f2-3004f58e80b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:51:52.526001   60225 system_pods.go:61] "metrics-server-569cc877fc-mzcvn" [9913463e-4ff9-4baa-a26e-76694605652e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:51:52.526009   60225 system_pods.go:61] "storage-provisioner" [08880428-a182-4540-a6f7-afffa3fc82a6] Running
	I0722 11:51:52.526020   60225 system_pods.go:74] duration metric: took 8.125407ms to wait for pod list to return data ...
	I0722 11:51:52.526030   60225 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:51:52.528765   60225 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:51:52.528788   60225 node_conditions.go:123] node cpu capacity is 2
	I0722 11:51:52.528801   60225 node_conditions.go:105] duration metric: took 2.765554ms to run NodePressure ...
	I0722 11:51:52.528822   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:52.797071   60225 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:51:52.802281   60225 kubeadm.go:739] kubelet initialised
	I0722 11:51:52.802311   60225 kubeadm.go:740] duration metric: took 5.210344ms waiting for restarted kubelet to initialise ...
	I0722 11:51:52.802322   60225 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:51:52.808512   60225 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.819816   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.819849   60225 pod_ready.go:81] duration metric: took 11.258701ms for pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.819861   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.819870   60225 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.825916   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.825958   60225 pod_ready.go:81] duration metric: took 6.076418ms for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.825977   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.825990   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.832243   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.832272   60225 pod_ready.go:81] duration metric: took 6.26533ms for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.832286   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.832295   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:54.841497   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
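
	(editor's note) The pod_ready lines above wait for each system-critical pod to report the PodReady condition, skipping pods whose node is itself not Ready. A rough client-go sketch of that "wait for Ready" check follows; the kubeconfig path, pod name, and timeout are illustrative assumptions, not taken from minikube's pod_ready.go.

	// Wait until a named pod reports condition Ready=True, or time out.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path is a placeholder for illustration.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"kube-controller-manager-default-k8s-diff-port-605740", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
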
	I0722 11:51:55.958678   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.959165   58921 main.go:141] libmachine: (no-preload-339929) Found IP for machine: 192.168.61.112
	I0722 11:51:55.959188   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has current primary IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.959195   58921 main.go:141] libmachine: (no-preload-339929) Reserving static IP address...
	I0722 11:51:55.959744   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "no-preload-339929", mac: "52:54:00:8d:72:69", ip: "192.168.61.112"} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:55.959774   58921 main.go:141] libmachine: (no-preload-339929) DBG | skip adding static IP to network mk-no-preload-339929 - found existing host DHCP lease matching {name: "no-preload-339929", mac: "52:54:00:8d:72:69", ip: "192.168.61.112"}
	I0722 11:51:55.959790   58921 main.go:141] libmachine: (no-preload-339929) Reserved static IP address: 192.168.61.112
	I0722 11:51:55.959806   58921 main.go:141] libmachine: (no-preload-339929) Waiting for SSH to be available...
	I0722 11:51:55.959817   58921 main.go:141] libmachine: (no-preload-339929) DBG | Getting to WaitForSSH function...
	I0722 11:51:55.962308   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.962703   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:55.962724   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.962853   58921 main.go:141] libmachine: (no-preload-339929) DBG | Using SSH client type: external
	I0722 11:51:55.962876   58921 main.go:141] libmachine: (no-preload-339929) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa (-rw-------)
	I0722 11:51:55.962924   58921 main.go:141] libmachine: (no-preload-339929) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:55.962946   58921 main.go:141] libmachine: (no-preload-339929) DBG | About to run SSH command:
	I0722 11:51:55.962963   58921 main.go:141] libmachine: (no-preload-339929) DBG | exit 0
	I0722 11:51:56.084629   58921 main.go:141] libmachine: (no-preload-339929) DBG | SSH cmd err, output: <nil>: 
	I0722 11:51:56.085007   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetConfigRaw
	I0722 11:51:56.085616   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:56.088120   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.088546   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.088576   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.088842   58921 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/config.json ...
	I0722 11:51:56.089066   58921 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:56.089088   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:56.089276   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.091216   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.091486   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.091508   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.091653   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.091823   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.091982   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.092132   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.092262   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.092434   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.092444   58921 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:56.192862   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:56.192891   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.193179   58921 buildroot.go:166] provisioning hostname "no-preload-339929"
	I0722 11:51:56.193207   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.193465   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.196195   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.196607   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.196637   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.196843   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.197048   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.197213   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.197358   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.197509   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.197707   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.197722   58921 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-339929 && echo "no-preload-339929" | sudo tee /etc/hostname
	I0722 11:51:56.309997   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-339929
	
	I0722 11:51:56.310019   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.312923   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.313263   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.313290   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.313481   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.313682   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.313882   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.314043   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.314223   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.314413   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.314435   58921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-339929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-339929/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-339929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:56.430088   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:56.430113   58921 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:56.430136   58921 buildroot.go:174] setting up certificates
	I0722 11:51:56.430147   58921 provision.go:84] configureAuth start
	I0722 11:51:56.430158   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.430428   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:56.433041   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.433421   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.433449   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.433619   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.436002   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.436300   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.436333   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.436508   58921 provision.go:143] copyHostCerts
	I0722 11:51:56.436579   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:56.436595   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:56.436665   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:56.436828   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:56.436843   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:56.436876   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:56.436950   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:56.436961   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:56.436987   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:56.437053   58921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.no-preload-339929 san=[127.0.0.1 192.168.61.112 localhost minikube no-preload-339929]
	I0722 11:51:56.792128   58921 provision.go:177] copyRemoteCerts
	I0722 11:51:56.792205   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:56.792238   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.794952   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.795254   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.795283   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.795439   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.795636   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.795772   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.795944   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:56.874574   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:56.898653   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 11:51:56.923200   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:51:56.946393   58921 provision.go:87] duration metric: took 516.233368ms to configureAuth
	I0722 11:51:56.946416   58921 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:56.946612   58921 config.go:182] Loaded profile config "no-preload-339929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 11:51:56.946702   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.949412   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.949923   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.949955   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.949982   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.950195   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.950330   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.950479   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.950591   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.950844   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.950865   58921 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:57.225885   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:57.225909   58921 machine.go:97] duration metric: took 1.136828183s to provisionDockerMachine
	I0722 11:51:57.225924   58921 start.go:293] postStartSetup for "no-preload-339929" (driver="kvm2")
	I0722 11:51:57.225941   58921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:57.225967   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.226315   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:57.226346   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.229404   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.229787   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.229816   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.230008   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.230210   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.230382   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.230518   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.317585   58921 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:57.323102   58921 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:57.323133   58921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:57.323218   58921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:57.323319   58921 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:57.323446   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:57.336656   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:57.365241   58921 start.go:296] duration metric: took 139.301981ms for postStartSetup
	I0722 11:51:57.365299   58921 fix.go:56] duration metric: took 20.888227284s for fixHost
	I0722 11:51:57.365322   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.368451   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.368792   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.368825   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.368964   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.369191   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.369362   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.369532   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.369698   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:57.369918   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:57.369929   58921 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:51:57.478389   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649117.454433204
	
	I0722 11:51:57.478414   58921 fix.go:216] guest clock: 1721649117.454433204
	I0722 11:51:57.478425   58921 fix.go:229] Guest: 2024-07-22 11:51:57.454433204 +0000 UTC Remote: 2024-07-22 11:51:57.365303623 +0000 UTC m=+356.953957779 (delta=89.129581ms)
	I0722 11:51:57.478469   58921 fix.go:200] guest clock delta is within tolerance: 89.129581ms
	I0722 11:51:57.478488   58921 start.go:83] releasing machines lock for "no-preload-339929", held for 21.001447333s
	I0722 11:51:57.478515   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.478798   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:57.481848   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.482283   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.482313   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.482464   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483024   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483211   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483286   58921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:57.483339   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.483594   58921 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:57.483620   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.486149   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486402   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486561   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.486591   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486746   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.486791   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.486808   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486969   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.487059   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.487141   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.487289   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.487306   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.487460   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.487645   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.591994   58921 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:57.598617   58921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:57.754364   58921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:57.761045   58921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:57.761104   58921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:57.778215   58921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:57.778244   58921 start.go:495] detecting cgroup driver to use...
	I0722 11:51:57.778315   58921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:57.794964   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:57.811232   58921 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:57.811292   58921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:57.826950   58921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:57.842302   58921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:57.971792   58921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:58.129047   58921 docker.go:233] disabling docker service ...
	I0722 11:51:58.129104   58921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:58.146348   58921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:58.160958   58921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:58.294011   58921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:58.414996   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:58.430045   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:58.456092   58921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0722 11:51:58.456186   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.471939   58921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:58.472003   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.485092   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.497749   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.510721   58921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:58.522286   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.535122   58921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.555717   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.567386   58921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:58.577638   58921 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:58.577717   58921 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:58.592354   58921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:51:58.602448   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:58.729652   58921 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:58.881699   58921 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:58.881761   58921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:58.887049   58921 start.go:563] Will wait 60s for crictl version
	I0722 11:51:58.887099   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:58.890867   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:58.933081   58921 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:58.933171   58921 ssh_runner.go:195] Run: crio --version
	I0722 11:51:58.960418   58921 ssh_runner.go:195] Run: crio --version
	I0722 11:51:58.992787   58921 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
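
The block above restarts CRI-O after reconfiguring it and then waits up to 60s for /var/run/crio/crio.sock to appear. A small, self-contained Go sketch of that wait loop follows; it is an illustration under stated assumptions (simple stat-based polling with a fixed interval), not minikube's start.go code, and on a machine without CRI-O it simply times out.

// criosocket_sketch.go - poll for a socket path until it exists or a deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath checks path every interval until it exists or timeout elapses.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Path mirrors the log; the short timeout here is just for demonstration.
	if err := waitForPath("/var/run/crio/crio.sock", 5*time.Second, 500*time.Millisecond); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}
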
	I0722 11:51:54.847605   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:57.346927   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:55.716788   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:56.216920   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:56.716328   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:57.217140   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:57.717149   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.217011   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.716511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:59.216969   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:59.717145   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:00.216454   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.994009   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:58.996823   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:58.997258   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:58.997279   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:58.997465   58921 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:59.001724   58921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:59.014700   58921 kubeadm.go:883] updating cluster {Name:no-preload-339929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:59.014819   58921 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 11:51:59.014847   58921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:59.049135   58921 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0722 11:51:59.049167   58921 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 11:51:59.049252   58921 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.049268   58921 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.049310   58921 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.049314   58921 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.049335   58921 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.049249   58921 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.049445   58921 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.049480   58921 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0722 11:51:59.050964   58921 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.050974   58921 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.050994   58921 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.051032   58921 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0722 11:51:59.051056   58921 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.051075   58921 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.051098   58921 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.051039   58921 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.220737   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.233831   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.239620   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.240125   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.240548   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.269898   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0722 11:51:59.293368   58921 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0722 11:51:59.293420   58921 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.293468   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.309956   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.336323   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.359236   58921 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0722 11:51:59.359284   58921 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.359336   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.359236   58921 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0722 11:51:59.359371   58921 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.359400   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.371412   58921 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0722 11:51:59.371449   58921 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.371485   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.404322   58921 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0722 11:51:59.404364   58921 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.404427   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.542134   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.542279   58921 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0722 11:51:59.542331   58921 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.542347   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.542360   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.542383   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.542439   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.542444   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.542691   58921 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0722 11:51:59.542725   58921 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.542757   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.653771   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.653819   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.653859   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0722 11:51:59.653877   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.653935   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0722 11:51:59.653945   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:51:59.653994   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:51:59.654000   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0722 11:51:59.654034   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0722 11:51:59.654078   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:51:59.654091   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:51:59.654101   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.706185   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706207   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.706218   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 11:51:59.706250   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.706256   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706292   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:51:59.706298   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0722 11:51:59.706369   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706464   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0722 11:51:59.706509   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0722 11:51:59.706554   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:51:57.342604   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:59.839045   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:59.846551   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:02.346391   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.347558   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:00.717154   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:01.216534   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:01.716349   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.217140   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.716458   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:03.216539   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:03.717179   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:04.216994   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:04.716264   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:05.216962   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.170882   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.464606279s)
	I0722 11:52:02.170914   58921 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.464582845s)
	I0722 11:52:02.170942   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0722 11:52:02.170923   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0722 11:52:02.170949   58921 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.464369058s)
	I0722 11:52:02.170970   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:52:02.170972   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0722 11:52:02.171024   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:52:04.139100   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.9680515s)
	I0722 11:52:04.139132   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0722 11:52:04.139166   58921 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:52:04.139250   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:52:01.840270   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.339017   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.840071   60225 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.840097   60225 pod_ready.go:81] duration metric: took 12.007790604s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.840110   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ssttk" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.845312   60225 pod_ready.go:92] pod "kube-proxy-ssttk" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.845336   60225 pod_ready.go:81] duration metric: took 5.218113ms for pod "kube-proxy-ssttk" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.845348   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.850239   60225 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.850264   60225 pod_ready.go:81] duration metric: took 4.905551ms for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.850273   60225 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:06.849408   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:09.347362   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:05.716753   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:06.216886   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:06.717064   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.217069   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.716953   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:08.216521   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:08.716334   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:09.216504   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:09.716904   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:10.216483   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.435274   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.29599961s)
	I0722 11:52:07.435305   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0722 11:52:07.435331   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:52:07.435368   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:52:08.882569   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.447179999s)
	I0722 11:52:08.882593   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0722 11:52:08.882621   58921 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:52:08.882670   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:52:06.857393   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:09.357742   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:11.845980   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:13.846559   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:10.717066   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:11.216328   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:11.717249   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:12.216579   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:12.716697   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:13.217042   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:13.717186   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:14.216301   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:14.716510   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:15.216925   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:10.861616   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.978918937s)
	I0722 11:52:10.861646   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0722 11:52:10.861670   58921 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:52:10.861717   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:52:11.517096   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 11:52:11.517126   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:52:11.517179   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:52:13.588498   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.071290819s)
	I0722 11:52:13.588531   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0722 11:52:13.588567   58921 cache_images.go:123] Successfully loaded all cached images
	I0722 11:52:13.588580   58921 cache_images.go:92] duration metric: took 14.539397599s to LoadCachedImages
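
The LoadCachedImages sequence above inspects each required image with podman, removes stale copies with crictl, and loads missing images from the tarballs cached under /var/lib/minikube/images. A simplified Go sketch of that decision follows; it is not minikube's cache_images.go, requires podman and sudo on the target host, and the image/tarball names are just examples taken from this log.

// loadcached_sketch.go - load a cached image tarball only if the image is missing.
package main

import (
	"fmt"
	"os/exec"
)

// ensureImage loads tarball into the runtime unless image is already present.
func ensureImage(image, tarball string) error {
	// "podman image inspect" exits non-zero when the image is not in the store.
	if err := exec.Command("sudo", "podman", "image", "inspect", image).Run(); err == nil {
		fmt.Printf("skipping %s (already present)\n", image)
		return nil
	}
	fmt.Printf("loading %s from %s\n", image, tarball)
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureImage("registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
		"/var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0"); err != nil {
		fmt.Println(err)
	}
}
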
	I0722 11:52:13.588591   58921 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.31.0-beta.0 crio true true} ...
	I0722 11:52:13.588728   58921 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-339929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:52:13.588806   58921 ssh_runner.go:195] Run: crio config
	I0722 11:52:13.641949   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:52:13.641969   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:52:13.641978   58921 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:52:13.641997   58921 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-339929 NodeName:no-preload-339929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:52:13.642187   58921 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-339929"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:52:13.642258   58921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 11:52:13.653174   58921 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:52:13.653244   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:52:13.662655   58921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0722 11:52:13.678906   58921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 11:52:13.699269   58921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
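
The kubeadm.yaml printed above is generated from the cluster options and written to /var/tmp/minikube/kubeadm.yaml.new on the node. As a toy illustration of that rendering step (not minikube's actual template, and covering only a small subset of the real file), a text/template sketch in Go:

// kubeadmtmpl_sketch.go - render a minimal ClusterConfiguration from node parameters.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
kubernetesVersion: {{.KubernetesVersion}}
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.NodeIP}}"]
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

type clusterParams struct {
	Port              int
	KubernetesVersion string
	NodeIP            string
	PodSubnet         string
	ServiceSubnet     string
}

func main() {
	p := clusterParams{
		Port:              8443,
		KubernetesVersion: "v1.31.0-beta.0",
		NodeIP:            "192.168.61.112",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// Render to stdout; minikube instead ships the rendered file to the node over SSH.
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
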
	I0722 11:52:13.718873   58921 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I0722 11:52:13.722962   58921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:52:13.736241   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:52:13.858093   58921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:52:13.875377   58921 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929 for IP: 192.168.61.112
	I0722 11:52:13.875402   58921 certs.go:194] generating shared ca certs ...
	I0722 11:52:13.875421   58921 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:52:13.875588   58921 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:52:13.875664   58921 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:52:13.875677   58921 certs.go:256] generating profile certs ...
	I0722 11:52:13.875785   58921 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.key
	I0722 11:52:13.875857   58921 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.key.26403d20
	I0722 11:52:13.875895   58921 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.key
	I0722 11:52:13.875998   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:52:13.876025   58921 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:52:13.876036   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:52:13.876057   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:52:13.876079   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:52:13.876100   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:52:13.876139   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:52:13.876804   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:52:13.923607   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:52:13.952785   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:52:13.983113   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:52:14.012712   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 11:52:14.047958   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:52:14.077411   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:52:14.100978   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:52:14.123416   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:52:14.145662   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:52:14.169188   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:52:14.194650   58921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:52:14.212538   58921 ssh_runner.go:195] Run: openssl version
	I0722 11:52:14.218725   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:52:14.231079   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.235652   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.235695   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.241643   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:52:14.252681   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:52:14.263166   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.267588   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.267629   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.273182   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:52:14.284087   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:52:14.294571   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.298824   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.298870   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.304464   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:52:14.315110   58921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:52:14.319444   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:52:14.325221   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:52:14.330923   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:52:14.336509   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:52:14.342749   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:52:14.348854   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
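
The openssl calls above use "-checkend 86400" to ask whether each certificate expires within the next 24 hours. An equivalent check written directly against Go's crypto/x509 is sketched below; it is illustrative only, and the certificate path is an example copied from this log rather than a required location.

// certcheck_sketch.go - report whether a PEM certificate expires within a duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin returns true when the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	const path = "/var/lib/minikube/certs/apiserver-kubelet-client.crt" // example path
	soon, err := expiresWithin(path, 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
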
	I0722 11:52:14.355682   58921 kubeadm.go:392] StartCluster: {Name:no-preload-339929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:52:14.355818   58921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:52:14.355867   58921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:52:14.395279   58921 cri.go:89] found id: ""
	I0722 11:52:14.395351   58921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:52:14.406738   58921 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:52:14.406755   58921 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:52:14.406793   58921 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:52:14.417161   58921 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:52:14.418468   58921 kubeconfig.go:125] found "no-preload-339929" server: "https://192.168.61.112:8443"
	I0722 11:52:14.420764   58921 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:52:14.430722   58921 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.112
	I0722 11:52:14.430749   58921 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:52:14.430760   58921 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:52:14.430809   58921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:52:14.472164   58921 cri.go:89] found id: ""
	I0722 11:52:14.472228   58921 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:52:14.489758   58921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:52:14.499830   58921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:52:14.499878   58921 kubeadm.go:157] found existing configuration files:
	
	I0722 11:52:14.499932   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:52:14.508977   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:52:14.509024   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:52:14.518199   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:52:14.527136   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:52:14.527182   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:52:14.536182   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:52:14.545425   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:52:14.545482   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:52:14.554843   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:52:14.563681   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:52:14.563722   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:52:14.572855   58921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:52:14.582257   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:14.691452   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.383530   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:11.857298   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:14.357114   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:16.347252   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:18.846603   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:15.716962   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.216373   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.716871   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:17.217108   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:17.716670   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:18.216503   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:18.717214   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:19.216481   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:19.716922   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:20.216618   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:15.600861   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.661719   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.756150   58921 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:52:15.756243   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.256571   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.756636   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.788487   58921 api_server.go:72] duration metric: took 1.032338614s to wait for apiserver process to appear ...
	I0722 11:52:16.788511   58921 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:52:16.788538   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:16.789057   58921 api_server.go:269] stopped: https://192.168.61.112:8443/healthz: Get "https://192.168.61.112:8443/healthz": dial tcp 192.168.61.112:8443: connect: connection refused
	I0722 11:52:17.289531   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.643492   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:52:19.643522   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:52:19.643538   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.712047   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:52:19.712087   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:52:19.789319   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.903924   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:19.903964   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:20.289484   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:20.294499   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:20.294532   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:16.357488   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:18.857066   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:20.789245   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:20.795813   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:20.795846   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:21.289564   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:21.294121   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 200:
	ok
	I0722 11:52:21.300616   58921 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 11:52:21.300644   58921 api_server.go:131] duration metric: took 4.512126962s to wait for apiserver health ...
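The lines above show the restart loop probing https://192.168.61.112:8443/healthz and treating "connection refused", 403, and 500 responses as "apiserver up but not yet healthy" until a 200 "ok" arrives. A minimal, self-contained Go sketch of that polling pattern follows; the URL, timeout, retry interval, and the insecure TLS setting are illustrative assumptions, not minikube's actual api_server.go implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// Non-200 responses (403 while RBAC bootstraps, 500 while post-start hooks
// finish, as in the log above) just mean "keep polling".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test cluster's apiserver uses a self-signed CA, so this
		// sketch skips certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.112:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the log, this same progression plays out over roughly 4.5 seconds: connection refused, then 403 for the anonymous user, then 500 while bootstrap post-start hooks complete, and finally 200.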
	I0722 11:52:21.300652   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:52:21.300661   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:52:21.302460   58921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:52:21.347296   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:23.848716   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:20.717047   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.216924   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.716824   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:22.216907   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:22.716538   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:23.216351   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:23.716755   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:24.216816   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:24.717065   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:25.216949   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.303690   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:52:21.315042   58921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:52:21.336417   58921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:52:21.347183   58921 system_pods.go:59] 8 kube-system pods found
	I0722 11:52:21.347225   58921 system_pods.go:61] "coredns-5cfdc65f69-v5qdv" [2321209d-652c-45c1-8d0a-b4ad58f60a25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:52:21.347238   58921 system_pods.go:61] "etcd-no-preload-339929" [9dbeed49-0d34-4643-8a7c-28b9b8b60b00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:52:21.347248   58921 system_pods.go:61] "kube-apiserver-no-preload-339929" [f9675e86-589e-4c6c-b4b5-627e2192b028] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:52:21.347259   58921 system_pods.go:61] "kube-controller-manager-no-preload-339929" [5033e74b-5a1c-4044-aadf-67d5e44b17c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:52:21.347265   58921 system_pods.go:61] "kube-proxy-78tx8" [13f226f0-8837-44d2-aa74-a7db43c73651] Running
	I0722 11:52:21.347276   58921 system_pods.go:61] "kube-scheduler-no-preload-339929" [bf82937c-c95c-4961-afca-60dfe128b6bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:52:21.347288   58921 system_pods.go:61] "metrics-server-78fcd8795b-2lbrr" [1eab4084-3ddf-44f3-9761-130a6f137ea6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:52:21.347294   58921 system_pods.go:61] "storage-provisioner" [66323714-b119-4680-91a3-2e2142e523b4] Running
	I0722 11:52:21.347308   58921 system_pods.go:74] duration metric: took 10.869226ms to wait for pod list to return data ...
	I0722 11:52:21.347316   58921 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:52:21.351215   58921 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:52:21.351242   58921 node_conditions.go:123] node cpu capacity is 2
	I0722 11:52:21.351254   58921 node_conditions.go:105] duration metric: took 3.932625ms to run NodePressure ...
	I0722 11:52:21.351273   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:21.620524   58921 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:52:21.625517   58921 kubeadm.go:739] kubelet initialised
	I0722 11:52:21.625540   58921 kubeadm.go:740] duration metric: took 4.987123ms waiting for restarted kubelet to initialise ...
	I0722 11:52:21.625550   58921 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:52:21.630823   58921 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:23.639602   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.140079   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:25.140103   58921 pod_ready.go:81] duration metric: took 3.509258556s for pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:25.140112   58921 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
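At this point pod_ready.go starts the per-pod readiness waits (coredns, then etcd, kube-apiserver, and so on), each polling until the pod's Ready condition turns True. A hedged client-go sketch of such a check is below; the kubeconfig path and pod/namespace names are taken from the log, but the code itself is an illustration, not minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-339929", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}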
	I0722 11:52:20.860912   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:23.356763   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.357406   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:26.345970   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:28.347288   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.716863   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:26.217017   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:26.217108   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:26.259154   59674 cri.go:89] found id: ""
	I0722 11:52:26.259183   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.259193   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:26.259201   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:26.259260   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:26.292777   59674 cri.go:89] found id: ""
	I0722 11:52:26.292801   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.292807   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:26.292813   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:26.292858   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:26.327874   59674 cri.go:89] found id: ""
	I0722 11:52:26.327899   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.327907   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:26.327913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:26.327960   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:26.372370   59674 cri.go:89] found id: ""
	I0722 11:52:26.372405   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.372415   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:26.372421   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:26.372468   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:26.406270   59674 cri.go:89] found id: ""
	I0722 11:52:26.406294   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.406301   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:26.406306   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:26.406355   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:26.441204   59674 cri.go:89] found id: ""
	I0722 11:52:26.441230   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.441237   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:26.441242   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:26.441302   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:26.476132   59674 cri.go:89] found id: ""
	I0722 11:52:26.476162   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.476174   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:26.476180   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:26.476236   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:26.509534   59674 cri.go:89] found id: ""
	I0722 11:52:26.509565   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.509576   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:26.509588   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:26.509601   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:26.564002   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:26.564030   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:26.578619   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:26.578650   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:26.706713   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:26.706738   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:26.706752   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:26.772168   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:26.772201   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
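The cycle above, which repeats every few seconds below, looks for each control-plane container with "sudo crictl ps -a --quiet --name=<component>" and, finding none, falls back to gathering kubelet, dmesg, CRI-O, and container-status logs. A small illustrative Go sketch of that container lookup follows; the crictl flags and component names mirror the logged commands, while the wrapper code itself is an assumption, not minikube's cri.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors: sudo crictl ps -a --quiet --name=<name>
// and returns the container IDs printed by crictl, one per line.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			// Matches the "No container was found matching ..." warnings above.
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%q containers: %v\n", name, ids)
	}
}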
	I0722 11:52:29.313944   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:29.328002   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:29.328076   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:29.367128   59674 cri.go:89] found id: ""
	I0722 11:52:29.367157   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.367166   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:29.367173   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:29.367244   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:29.401552   59674 cri.go:89] found id: ""
	I0722 11:52:29.401581   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.401592   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:29.401599   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:29.401677   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:29.433892   59674 cri.go:89] found id: ""
	I0722 11:52:29.433919   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.433931   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:29.433943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:29.433993   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:29.469619   59674 cri.go:89] found id: ""
	I0722 11:52:29.469649   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.469660   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:29.469667   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:29.469726   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:29.504771   59674 cri.go:89] found id: ""
	I0722 11:52:29.504795   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.504805   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:29.504811   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:29.504871   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:29.538861   59674 cri.go:89] found id: ""
	I0722 11:52:29.538890   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.538900   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:29.538912   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:29.538975   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:29.593633   59674 cri.go:89] found id: ""
	I0722 11:52:29.593669   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.593680   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:29.593688   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:29.593747   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:29.638605   59674 cri.go:89] found id: ""
	I0722 11:52:29.638636   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.638645   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:29.638653   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:29.638664   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:29.691633   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:29.691662   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:29.707277   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:29.707305   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:29.785616   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:29.785638   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:29.785669   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:29.857487   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:29.857517   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:27.146649   58921 pod_ready.go:102] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:28.646058   58921 pod_ready.go:92] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:28.646083   58921 pod_ready.go:81] duration metric: took 3.505964852s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:28.646092   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:27.855581   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:29.856605   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:30.847291   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.847946   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.398141   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:32.411380   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:32.411453   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:32.445857   59674 cri.go:89] found id: ""
	I0722 11:52:32.445882   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.445889   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:32.445895   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:32.445946   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:32.478146   59674 cri.go:89] found id: ""
	I0722 11:52:32.478180   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.478190   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:32.478197   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:32.478268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:32.511110   59674 cri.go:89] found id: ""
	I0722 11:52:32.511138   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.511147   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:32.511161   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:32.511216   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:32.545388   59674 cri.go:89] found id: ""
	I0722 11:52:32.545415   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.545425   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:32.545432   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:32.545489   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:32.579097   59674 cri.go:89] found id: ""
	I0722 11:52:32.579125   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.579135   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:32.579141   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:32.579205   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:32.615302   59674 cri.go:89] found id: ""
	I0722 11:52:32.615333   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.615343   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:32.615350   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:32.615407   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:32.654527   59674 cri.go:89] found id: ""
	I0722 11:52:32.654552   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.654562   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:32.654568   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:32.654625   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:32.689409   59674 cri.go:89] found id: ""
	I0722 11:52:32.689437   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.689445   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:32.689454   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:32.689470   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:32.740478   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:32.740511   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:32.754266   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:32.754299   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:32.824441   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:32.824461   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:32.824475   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:32.896752   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:32.896781   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:30.652706   58921 pod_ready.go:102] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.653310   58921 pod_ready.go:102] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.154169   58921 pod_ready.go:92] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.154195   58921 pod_ready.go:81] duration metric: took 6.508095973s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.154207   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.160406   58921 pod_ready.go:92] pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.160429   58921 pod_ready.go:81] duration metric: took 6.213375ms for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.160440   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-78tx8" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.166358   58921 pod_ready.go:92] pod "kube-proxy-78tx8" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.166377   58921 pod_ready.go:81] duration metric: took 5.930051ms for pod "kube-proxy-78tx8" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.166387   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.170508   58921 pod_ready.go:92] pod "kube-scheduler-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.170528   58921 pod_ready.go:81] duration metric: took 4.133521ms for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.170538   58921 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:32.355967   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:34.358106   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.346579   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:37.346671   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:39.346974   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.438478   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:35.454105   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:35.454175   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:35.493287   59674 cri.go:89] found id: ""
	I0722 11:52:35.493319   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.493330   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:35.493337   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:35.493396   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:35.528035   59674 cri.go:89] found id: ""
	I0722 11:52:35.528060   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.528066   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:35.528072   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:35.528126   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:35.586153   59674 cri.go:89] found id: ""
	I0722 11:52:35.586199   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.586213   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:35.586220   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:35.586283   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:35.630371   59674 cri.go:89] found id: ""
	I0722 11:52:35.630405   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.630416   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:35.630425   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:35.630499   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:35.667593   59674 cri.go:89] found id: ""
	I0722 11:52:35.667621   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.667629   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:35.667635   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:35.667682   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:35.706933   59674 cri.go:89] found id: ""
	I0722 11:52:35.706964   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.706973   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:35.706981   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:35.707040   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:35.743174   59674 cri.go:89] found id: ""
	I0722 11:52:35.743205   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.743215   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:35.743223   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:35.743289   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:35.784450   59674 cri.go:89] found id: ""
	I0722 11:52:35.784478   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.784487   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:35.784497   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:35.784508   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:35.840326   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:35.840357   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:35.856432   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:35.856471   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:35.932273   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:35.932298   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:35.932313   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:36.010376   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:36.010420   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:38.552982   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:38.566817   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:38.566895   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:38.601313   59674 cri.go:89] found id: ""
	I0722 11:52:38.601356   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.601371   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:38.601381   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:38.601459   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:38.637303   59674 cri.go:89] found id: ""
	I0722 11:52:38.637331   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.637341   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:38.637352   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:38.637413   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:38.672840   59674 cri.go:89] found id: ""
	I0722 11:52:38.672871   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.672883   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:38.672894   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:38.672986   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:38.709375   59674 cri.go:89] found id: ""
	I0722 11:52:38.709402   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.709413   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:38.709420   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:38.709473   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:38.744060   59674 cri.go:89] found id: ""
	I0722 11:52:38.744084   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.744094   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:38.744100   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:38.744161   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:38.778322   59674 cri.go:89] found id: ""
	I0722 11:52:38.778350   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.778361   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:38.778368   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:38.778427   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:38.811803   59674 cri.go:89] found id: ""
	I0722 11:52:38.811830   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.811840   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:38.811847   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:38.811902   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:38.843935   59674 cri.go:89] found id: ""
	I0722 11:52:38.843959   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.843975   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:38.843985   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:38.843999   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:38.912613   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:38.912639   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:38.912654   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:39.001924   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:39.001964   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:39.041645   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:39.041684   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:39.093322   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:39.093354   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:37.177516   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:39.675985   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:36.856164   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:38.858983   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.847112   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:44.346271   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.606698   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:41.619758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:41.619815   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:41.657432   59674 cri.go:89] found id: ""
	I0722 11:52:41.657458   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.657469   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:41.657476   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:41.657536   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:41.695136   59674 cri.go:89] found id: ""
	I0722 11:52:41.695169   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.695177   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:41.695183   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:41.695243   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:41.735595   59674 cri.go:89] found id: ""
	I0722 11:52:41.735621   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.735641   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:41.735648   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:41.735710   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:41.770398   59674 cri.go:89] found id: ""
	I0722 11:52:41.770428   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.770438   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:41.770445   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:41.770554   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:41.808250   59674 cri.go:89] found id: ""
	I0722 11:52:41.808277   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.808285   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:41.808290   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:41.808349   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:41.843494   59674 cri.go:89] found id: ""
	I0722 11:52:41.843524   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.843536   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:41.843543   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:41.843611   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:41.882916   59674 cri.go:89] found id: ""
	I0722 11:52:41.882941   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.882949   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:41.882954   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:41.883011   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:41.916503   59674 cri.go:89] found id: ""
	I0722 11:52:41.916527   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.916538   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:41.916549   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:41.916564   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:41.966989   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:41.967023   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:42.021676   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:42.021716   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:42.054625   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:42.054655   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:42.122425   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:42.122449   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:42.122463   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:44.699097   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:44.713759   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:44.713815   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:44.752668   59674 cri.go:89] found id: ""
	I0722 11:52:44.752698   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.752709   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:44.752716   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:44.752778   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:44.793550   59674 cri.go:89] found id: ""
	I0722 11:52:44.793575   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.793587   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:44.793594   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:44.793665   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:44.833860   59674 cri.go:89] found id: ""
	I0722 11:52:44.833882   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.833890   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:44.833903   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:44.833952   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:44.873847   59674 cri.go:89] found id: ""
	I0722 11:52:44.873880   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.873898   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:44.873910   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:44.873957   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:44.907843   59674 cri.go:89] found id: ""
	I0722 11:52:44.907867   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.907877   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:44.907884   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:44.907937   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:44.942998   59674 cri.go:89] found id: ""
	I0722 11:52:44.943026   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.943034   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:44.943040   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:44.943093   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:44.981145   59674 cri.go:89] found id: ""
	I0722 11:52:44.981173   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.981183   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:44.981190   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:44.981252   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:45.018542   59674 cri.go:89] found id: ""
	I0722 11:52:45.018568   59674 logs.go:276] 0 containers: []
	W0722 11:52:45.018576   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:45.018585   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:45.018599   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:45.069480   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:45.069510   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:45.083323   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:45.083347   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:45.149976   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:45.149996   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:45.150008   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:45.230617   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:45.230649   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:41.677474   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:43.678565   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.357194   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:43.856753   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:46.346339   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.846643   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:47.770384   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:47.793582   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:47.793654   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:47.837187   59674 cri.go:89] found id: ""
	I0722 11:52:47.837215   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.837224   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:47.837232   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:47.837290   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:47.874295   59674 cri.go:89] found id: ""
	I0722 11:52:47.874325   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.874336   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:47.874345   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:47.874414   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:47.915782   59674 cri.go:89] found id: ""
	I0722 11:52:47.915812   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.915823   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:47.915830   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:47.915886   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:47.956624   59674 cri.go:89] found id: ""
	I0722 11:52:47.956653   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.956663   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:47.956670   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:47.956731   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:47.996237   59674 cri.go:89] found id: ""
	I0722 11:52:47.996264   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.996272   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:47.996277   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:47.996335   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:48.032022   59674 cri.go:89] found id: ""
	I0722 11:52:48.032046   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.032058   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:48.032066   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:48.032117   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:48.066218   59674 cri.go:89] found id: ""
	I0722 11:52:48.066248   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.066259   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:48.066265   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:48.066316   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:48.099781   59674 cri.go:89] found id: ""
	I0722 11:52:48.099803   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.099810   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:48.099818   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:48.099827   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:48.174488   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:48.174528   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:48.215029   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:48.215068   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:48.268819   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:48.268850   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:48.283307   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:48.283335   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:48.356491   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:45.678697   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.179684   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:45.857970   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.357330   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.357469   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.846976   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.847954   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.857172   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:50.871178   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:50.871244   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:50.907166   59674 cri.go:89] found id: ""
	I0722 11:52:50.907190   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.907197   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:50.907203   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:50.907256   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:50.942929   59674 cri.go:89] found id: ""
	I0722 11:52:50.942958   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.942969   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:50.942976   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:50.943041   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:50.982323   59674 cri.go:89] found id: ""
	I0722 11:52:50.982355   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.982367   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:50.982373   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:50.982436   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:51.016557   59674 cri.go:89] found id: ""
	I0722 11:52:51.016586   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.016597   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:51.016604   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:51.016662   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:51.051811   59674 cri.go:89] found id: ""
	I0722 11:52:51.051844   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.051855   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:51.051863   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:51.051923   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:51.088147   59674 cri.go:89] found id: ""
	I0722 11:52:51.088177   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.088189   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:51.088197   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:51.088257   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:51.126795   59674 cri.go:89] found id: ""
	I0722 11:52:51.126827   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.126838   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:51.126845   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:51.126909   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:51.165508   59674 cri.go:89] found id: ""
	I0722 11:52:51.165539   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.165550   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:51.165562   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:51.165575   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:51.245014   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:51.245040   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:51.245055   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:51.335845   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:51.335893   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:51.375806   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:51.375837   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:51.430241   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:51.430270   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:53.944572   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:53.957805   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:53.957899   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:53.997116   59674 cri.go:89] found id: ""
	I0722 11:52:53.997144   59674 logs.go:276] 0 containers: []
	W0722 11:52:53.997154   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:53.997161   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:53.997222   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:54.033518   59674 cri.go:89] found id: ""
	I0722 11:52:54.033544   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.033553   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:54.033560   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:54.033626   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:54.071083   59674 cri.go:89] found id: ""
	I0722 11:52:54.071108   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.071119   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:54.071127   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:54.071194   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:54.107834   59674 cri.go:89] found id: ""
	I0722 11:52:54.107860   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.107868   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:54.107873   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:54.107929   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:54.141825   59674 cri.go:89] found id: ""
	I0722 11:52:54.141850   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.141858   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:54.141865   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:54.141925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:54.174297   59674 cri.go:89] found id: ""
	I0722 11:52:54.174323   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.174333   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:54.174341   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:54.174403   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:54.206781   59674 cri.go:89] found id: ""
	I0722 11:52:54.206803   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.206811   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:54.206816   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:54.206861   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:54.239180   59674 cri.go:89] found id: ""
	I0722 11:52:54.239204   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.239212   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:54.239223   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:54.239237   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:54.307317   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:54.307345   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:54.307360   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:54.392334   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:54.392368   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:54.435129   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:54.435168   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:54.495428   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:54.495456   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:50.676790   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.678046   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:55.177430   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.357839   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:54.856859   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:55.346866   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.845527   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.009559   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:57.024145   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:57.024215   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:57.063027   59674 cri.go:89] found id: ""
	I0722 11:52:57.063053   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.063060   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:57.063066   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:57.063133   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:57.095940   59674 cri.go:89] found id: ""
	I0722 11:52:57.095961   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.095968   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:57.095973   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:57.096018   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:57.129931   59674 cri.go:89] found id: ""
	I0722 11:52:57.129952   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.129960   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:57.129965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:57.130009   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:57.164643   59674 cri.go:89] found id: ""
	I0722 11:52:57.164672   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.164683   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:57.164691   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:57.164744   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:57.201411   59674 cri.go:89] found id: ""
	I0722 11:52:57.201440   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.201451   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:57.201458   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:57.201523   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:57.235816   59674 cri.go:89] found id: ""
	I0722 11:52:57.235838   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.235848   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:57.235854   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:57.235913   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:57.273896   59674 cri.go:89] found id: ""
	I0722 11:52:57.273925   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.273936   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:57.273943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:57.273997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:57.312577   59674 cri.go:89] found id: ""
	I0722 11:52:57.312602   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.312610   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:57.312618   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:57.312636   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:57.366529   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:57.366558   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:57.380829   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:57.380854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:57.450855   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:57.450875   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:57.450889   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:57.531450   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:57.531480   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:00.071642   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:00.085199   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:00.085264   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:00.123418   59674 cri.go:89] found id: ""
	I0722 11:53:00.123439   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.123446   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:00.123451   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:00.123510   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:00.157005   59674 cri.go:89] found id: ""
	I0722 11:53:00.157032   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.157042   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:00.157049   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:00.157108   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:00.196244   59674 cri.go:89] found id: ""
	I0722 11:53:00.196272   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.196281   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:00.196286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:00.196335   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:00.233010   59674 cri.go:89] found id: ""
	I0722 11:53:00.233039   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.233049   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:00.233056   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:00.233112   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:00.268154   59674 cri.go:89] found id: ""
	I0722 11:53:00.268179   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.268187   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:00.268192   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:00.268250   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:00.304159   59674 cri.go:89] found id: ""
	I0722 11:53:00.304184   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.304194   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:00.304201   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:00.304268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:00.336853   59674 cri.go:89] found id: ""
	I0722 11:53:00.336883   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.336893   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:00.336899   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:00.336960   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:00.370921   59674 cri.go:89] found id: ""
	I0722 11:53:00.370943   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.370953   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:00.370963   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:00.370979   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:57.177913   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:59.677194   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.356163   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:59.357042   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:00.347125   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:02.846531   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:00.422367   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:00.422399   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:00.437915   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:00.437947   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:00.512663   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:00.512689   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:00.512700   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:00.595147   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:00.595189   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:03.135150   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:03.148079   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:03.148151   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:03.182278   59674 cri.go:89] found id: ""
	I0722 11:53:03.182308   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.182318   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:03.182327   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:03.182409   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:03.220570   59674 cri.go:89] found id: ""
	I0722 11:53:03.220599   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.220607   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:03.220613   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:03.220671   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:03.255917   59674 cri.go:89] found id: ""
	I0722 11:53:03.255940   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.255950   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:03.255957   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:03.256020   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:03.290857   59674 cri.go:89] found id: ""
	I0722 11:53:03.290885   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.290895   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:03.290902   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:03.290959   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:03.326917   59674 cri.go:89] found id: ""
	I0722 11:53:03.326940   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.326951   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:03.326958   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:03.327016   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:03.363787   59674 cri.go:89] found id: ""
	I0722 11:53:03.363809   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.363818   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:03.363825   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:03.363881   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:03.397453   59674 cri.go:89] found id: ""
	I0722 11:53:03.397479   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.397489   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:03.397496   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:03.397554   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:03.429984   59674 cri.go:89] found id: ""
	I0722 11:53:03.430012   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.430020   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:03.430037   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:03.430054   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:03.509273   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:03.509305   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:03.555522   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:03.555552   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:03.607361   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:03.607389   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:03.622731   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:03.622752   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:03.699844   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:02.176754   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:04.180602   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:01.856868   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:04.356343   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:05.346023   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:07.846190   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:06.200053   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:06.213571   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:06.213628   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:06.249320   59674 cri.go:89] found id: ""
	I0722 11:53:06.249348   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.249359   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:06.249366   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:06.249426   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:06.283378   59674 cri.go:89] found id: ""
	I0722 11:53:06.283405   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.283415   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:06.283422   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:06.283482   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:06.319519   59674 cri.go:89] found id: ""
	I0722 11:53:06.319540   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.319548   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:06.319553   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:06.319606   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:06.352263   59674 cri.go:89] found id: ""
	I0722 11:53:06.352289   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.352298   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:06.352310   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:06.352370   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:06.388262   59674 cri.go:89] found id: ""
	I0722 11:53:06.388285   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.388292   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:06.388297   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:06.388348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:06.427487   59674 cri.go:89] found id: ""
	I0722 11:53:06.427519   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.427529   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:06.427537   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:06.427592   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:06.462567   59674 cri.go:89] found id: ""
	I0722 11:53:06.462597   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.462610   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:06.462618   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:06.462674   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:06.496880   59674 cri.go:89] found id: ""
	I0722 11:53:06.496904   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.496911   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:06.496920   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:06.496929   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:06.549225   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:06.549262   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:06.564780   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:06.564808   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:06.632152   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:06.632177   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:06.632196   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:06.706909   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:06.706948   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:09.246773   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:09.260605   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:09.260673   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:09.294685   59674 cri.go:89] found id: ""
	I0722 11:53:09.294707   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.294718   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:09.294726   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:09.294787   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:09.331109   59674 cri.go:89] found id: ""
	I0722 11:53:09.331140   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.331148   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:09.331153   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:09.331208   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:09.366873   59674 cri.go:89] found id: ""
	I0722 11:53:09.366901   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.366911   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:09.366928   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:09.366980   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:09.399614   59674 cri.go:89] found id: ""
	I0722 11:53:09.399642   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.399649   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:09.399655   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:09.399708   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:09.434326   59674 cri.go:89] found id: ""
	I0722 11:53:09.434359   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.434369   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:09.434375   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:09.434437   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:09.468911   59674 cri.go:89] found id: ""
	I0722 11:53:09.468942   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.468953   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:09.468961   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:09.469021   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:09.510003   59674 cri.go:89] found id: ""
	I0722 11:53:09.510031   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.510042   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:09.510048   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:09.510101   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:09.545074   59674 cri.go:89] found id: ""
	I0722 11:53:09.545103   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.545113   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:09.545123   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:09.545148   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:09.559370   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:09.559399   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:09.632039   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:09.632064   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:09.632083   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:09.711851   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:09.711881   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:09.751872   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:09.751898   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:06.678310   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:09.176261   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:06.358444   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:08.858131   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:09.846552   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:12.347071   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:12.302294   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:12.315638   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:12.315708   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:12.349556   59674 cri.go:89] found id: ""
	I0722 11:53:12.349579   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.349588   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:12.349595   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:12.349651   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:12.387443   59674 cri.go:89] found id: ""
	I0722 11:53:12.387470   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.387483   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:12.387488   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:12.387541   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:12.422676   59674 cri.go:89] found id: ""
	I0722 11:53:12.422704   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.422714   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:12.422720   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:12.422781   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:12.457069   59674 cri.go:89] found id: ""
	I0722 11:53:12.457099   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.457111   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:12.457117   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:12.457175   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:12.492498   59674 cri.go:89] found id: ""
	I0722 11:53:12.492526   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.492536   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:12.492543   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:12.492603   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:12.529015   59674 cri.go:89] found id: ""
	I0722 11:53:12.529046   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.529056   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:12.529063   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:12.529122   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:12.564325   59674 cri.go:89] found id: ""
	I0722 11:53:12.564353   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.564363   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:12.564371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:12.564441   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:12.603232   59674 cri.go:89] found id: ""
	I0722 11:53:12.603257   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.603269   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:12.603278   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:12.603289   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:12.689901   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:12.689933   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:12.729780   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:12.729808   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:12.778899   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:12.778928   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:12.792619   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:12.792649   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:12.860293   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:15.361321   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:15.375062   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:15.375125   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:15.409072   59674 cri.go:89] found id: ""
	I0722 11:53:15.409096   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.409104   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:15.409109   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:15.409163   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:11.176321   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:13.176728   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.176983   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:11.356441   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:13.356690   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:14.846984   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:17.346182   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:19.346559   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.447004   59674 cri.go:89] found id: ""
	I0722 11:53:15.447026   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.447033   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:15.447039   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:15.447096   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:15.480783   59674 cri.go:89] found id: ""
	I0722 11:53:15.480811   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.480822   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:15.480829   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:15.480906   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:15.520672   59674 cri.go:89] found id: ""
	I0722 11:53:15.520701   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.520713   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:15.520721   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:15.520777   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:15.557886   59674 cri.go:89] found id: ""
	I0722 11:53:15.557916   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.557926   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:15.557933   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:15.557994   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:15.593517   59674 cri.go:89] found id: ""
	I0722 11:53:15.593545   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.593555   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:15.593561   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:15.593619   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:15.628205   59674 cri.go:89] found id: ""
	I0722 11:53:15.628235   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.628246   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:15.628253   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:15.628314   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:15.664239   59674 cri.go:89] found id: ""
	I0722 11:53:15.664265   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.664276   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:15.664287   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:15.664300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:15.714246   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:15.714281   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:15.728467   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:15.728490   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:15.813299   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:15.813323   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:15.813339   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:15.899949   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:15.899984   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:18.443394   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:18.457499   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:18.457555   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:18.489712   59674 cri.go:89] found id: ""
	I0722 11:53:18.489735   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.489745   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:18.489752   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:18.489812   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:18.524947   59674 cri.go:89] found id: ""
	I0722 11:53:18.524973   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.524982   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:18.524989   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:18.525045   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:18.560325   59674 cri.go:89] found id: ""
	I0722 11:53:18.560350   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.560361   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:18.560367   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:18.560439   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:18.594221   59674 cri.go:89] found id: ""
	I0722 11:53:18.594247   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.594255   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:18.594265   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:18.594322   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:18.630809   59674 cri.go:89] found id: ""
	I0722 11:53:18.630839   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.630850   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:18.630857   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:18.630917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:18.666051   59674 cri.go:89] found id: ""
	I0722 11:53:18.666078   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.666089   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:18.666100   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:18.666159   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:18.703337   59674 cri.go:89] found id: ""
	I0722 11:53:18.703362   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.703370   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:18.703375   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:18.703435   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:18.738960   59674 cri.go:89] found id: ""
	I0722 11:53:18.738990   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.738999   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:18.739008   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:18.739022   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:18.788130   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:18.788163   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:18.802219   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:18.802249   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:18.869568   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:18.869586   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:18.869597   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:18.947223   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:18.947256   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:17.177247   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:19.677072   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.857320   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:18.356290   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:20.356364   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:21.346698   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:23.846749   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:21.487936   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:21.501337   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:21.501421   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:21.537649   59674 cri.go:89] found id: ""
	I0722 11:53:21.537674   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.537681   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:21.537686   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:21.537746   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:21.583693   59674 cri.go:89] found id: ""
	I0722 11:53:21.583728   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.583738   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:21.583745   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:21.583803   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:21.621690   59674 cri.go:89] found id: ""
	I0722 11:53:21.621714   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.621722   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:21.621728   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:21.621773   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:21.657855   59674 cri.go:89] found id: ""
	I0722 11:53:21.657878   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.657885   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:21.657891   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:21.657953   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:21.695025   59674 cri.go:89] found id: ""
	I0722 11:53:21.695051   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.695059   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:21.695065   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:21.695113   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:21.730108   59674 cri.go:89] found id: ""
	I0722 11:53:21.730138   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.730146   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:21.730151   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:21.730208   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:21.763943   59674 cri.go:89] found id: ""
	I0722 11:53:21.763972   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.763980   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:21.763985   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:21.764030   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:21.801227   59674 cri.go:89] found id: ""
	I0722 11:53:21.801251   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.801259   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:21.801270   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:21.801283   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:21.851428   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:21.851457   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:21.867798   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:21.867827   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:21.945577   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:21.945599   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:21.945612   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:22.028796   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:22.028839   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:24.577167   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:24.589859   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:24.589917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:24.623952   59674 cri.go:89] found id: ""
	I0722 11:53:24.623985   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.623997   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:24.624003   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:24.624065   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:24.658881   59674 cri.go:89] found id: ""
	I0722 11:53:24.658910   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.658919   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:24.658925   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:24.658973   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:24.694551   59674 cri.go:89] found id: ""
	I0722 11:53:24.694574   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.694584   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:24.694590   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:24.694634   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:24.728952   59674 cri.go:89] found id: ""
	I0722 11:53:24.728980   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.728990   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:24.728999   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:24.729061   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:24.764562   59674 cri.go:89] found id: ""
	I0722 11:53:24.764584   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.764592   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:24.764597   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:24.764643   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:24.804184   59674 cri.go:89] found id: ""
	I0722 11:53:24.804209   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.804219   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:24.804226   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:24.804277   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:24.841870   59674 cri.go:89] found id: ""
	I0722 11:53:24.841896   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.841906   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:24.841913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:24.841967   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:24.876174   59674 cri.go:89] found id: ""
	I0722 11:53:24.876201   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.876210   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:24.876220   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:24.876234   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:24.928405   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:24.928434   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:24.942443   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:24.942472   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:25.010281   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:25.010304   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:25.010318   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:25.091493   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:25.091525   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:22.176013   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:24.177414   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:22.356642   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:24.856817   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:26.346061   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:28.346192   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:27.630939   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:27.644250   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:27.644324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:27.686356   59674 cri.go:89] found id: ""
	I0722 11:53:27.686381   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.686391   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:27.686404   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:27.686483   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:27.719105   59674 cri.go:89] found id: ""
	I0722 11:53:27.719133   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.719143   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:27.719149   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:27.719210   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:27.755476   59674 cri.go:89] found id: ""
	I0722 11:53:27.755505   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.755514   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:27.755520   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:27.755570   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:27.789936   59674 cri.go:89] found id: ""
	I0722 11:53:27.789963   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.789971   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:27.789977   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:27.790023   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:27.824246   59674 cri.go:89] found id: ""
	I0722 11:53:27.824273   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.824280   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:27.824286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:27.824332   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:27.860081   59674 cri.go:89] found id: ""
	I0722 11:53:27.860107   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.860114   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:27.860120   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:27.860172   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:27.895705   59674 cri.go:89] found id: ""
	I0722 11:53:27.895732   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.895741   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:27.895748   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:27.895801   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:27.930750   59674 cri.go:89] found id: ""
	I0722 11:53:27.930774   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.930781   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:27.930790   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:27.930802   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:28.025545   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:28.025567   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:28.025578   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:28.111194   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:28.111227   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:28.154270   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:28.154300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:28.205822   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:28.205854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:26.677054   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:29.178063   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:26.856858   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:29.356840   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:30.346338   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:32.346478   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:30.720468   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:30.733753   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:30.733806   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:30.771774   59674 cri.go:89] found id: ""
	I0722 11:53:30.771803   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.771810   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:30.771816   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:30.771876   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:30.810499   59674 cri.go:89] found id: ""
	I0722 11:53:30.810526   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.810537   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:30.810543   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:30.810608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:30.846824   59674 cri.go:89] found id: ""
	I0722 11:53:30.846854   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.846865   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:30.846872   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:30.846929   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:30.882372   59674 cri.go:89] found id: ""
	I0722 11:53:30.882399   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.882408   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:30.882415   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:30.882462   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:30.916152   59674 cri.go:89] found id: ""
	I0722 11:53:30.916186   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.916201   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:30.916209   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:30.916281   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:30.950442   59674 cri.go:89] found id: ""
	I0722 11:53:30.950466   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.950475   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:30.950482   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:30.950537   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:30.988328   59674 cri.go:89] found id: ""
	I0722 11:53:30.988355   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.988367   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:30.988374   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:30.988452   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:31.024500   59674 cri.go:89] found id: ""
	I0722 11:53:31.024531   59674 logs.go:276] 0 containers: []
	W0722 11:53:31.024542   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:31.024552   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:31.024565   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:31.078276   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:31.078306   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:31.093640   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:31.093665   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:31.161107   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:31.161131   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:31.161145   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:31.248520   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:31.248552   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:33.792694   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:33.806731   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:33.806802   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:33.840813   59674 cri.go:89] found id: ""
	I0722 11:53:33.840842   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.840852   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:33.840859   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:33.840930   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:33.878353   59674 cri.go:89] found id: ""
	I0722 11:53:33.878380   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.878388   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:33.878394   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:33.878453   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:33.913894   59674 cri.go:89] found id: ""
	I0722 11:53:33.913927   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.913937   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:33.913944   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:33.914007   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:33.950659   59674 cri.go:89] found id: ""
	I0722 11:53:33.950689   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.950700   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:33.950706   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:33.950762   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:33.987904   59674 cri.go:89] found id: ""
	I0722 11:53:33.987932   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.987940   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:33.987945   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:33.987995   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:34.022877   59674 cri.go:89] found id: ""
	I0722 11:53:34.022900   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.022910   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:34.022918   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:34.022970   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:34.056678   59674 cri.go:89] found id: ""
	I0722 11:53:34.056707   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.056717   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:34.056722   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:34.056769   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:34.089573   59674 cri.go:89] found id: ""
	I0722 11:53:34.089602   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.089610   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:34.089618   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:34.089630   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:34.161023   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:34.161043   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:34.161058   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:34.243215   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:34.243249   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:34.290788   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:34.290812   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:34.339653   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:34.339692   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:31.677233   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:33.678067   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:31.856615   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:33.857665   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:34.846962   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.847525   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:39.347402   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.857217   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:36.871083   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:36.871150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:36.913807   59674 cri.go:89] found id: ""
	I0722 11:53:36.913833   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.913841   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:36.913847   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:36.913923   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:36.953290   59674 cri.go:89] found id: ""
	I0722 11:53:36.953316   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.953327   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:36.953334   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:36.953395   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:36.990900   59674 cri.go:89] found id: ""
	I0722 11:53:36.990930   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.990938   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:36.990943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:36.990997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:37.034346   59674 cri.go:89] found id: ""
	I0722 11:53:37.034371   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.034381   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:37.034387   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:37.034444   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:37.071413   59674 cri.go:89] found id: ""
	I0722 11:53:37.071440   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.071451   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:37.071458   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:37.071509   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:37.107034   59674 cri.go:89] found id: ""
	I0722 11:53:37.107065   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.107076   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:37.107084   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:37.107143   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:37.145505   59674 cri.go:89] found id: ""
	I0722 11:53:37.145528   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.145536   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:37.145545   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:37.145607   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:37.182287   59674 cri.go:89] found id: ""
	I0722 11:53:37.182313   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.182321   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:37.182332   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:37.182343   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:37.195663   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:37.195688   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:37.267451   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:37.267476   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:37.267492   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:37.348532   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:37.348561   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:37.396108   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:37.396134   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:39.946775   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:39.959980   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:39.960039   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:39.994172   59674 cri.go:89] found id: ""
	I0722 11:53:39.994198   59674 logs.go:276] 0 containers: []
	W0722 11:53:39.994208   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:39.994213   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:39.994269   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:40.032782   59674 cri.go:89] found id: ""
	I0722 11:53:40.032813   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.032823   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:40.032830   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:40.032890   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:40.067503   59674 cri.go:89] found id: ""
	I0722 11:53:40.067525   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.067532   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:40.067537   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:40.067593   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:40.102234   59674 cri.go:89] found id: ""
	I0722 11:53:40.102262   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.102273   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:40.102280   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:40.102342   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:40.135152   59674 cri.go:89] found id: ""
	I0722 11:53:40.135180   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.135190   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:40.135197   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:40.135262   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:40.168930   59674 cri.go:89] found id: ""
	I0722 11:53:40.168958   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.168978   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:40.168993   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:40.169056   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:40.209032   59674 cri.go:89] found id: ""
	I0722 11:53:40.209058   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.209065   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:40.209071   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:40.209131   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:40.243952   59674 cri.go:89] found id: ""
	I0722 11:53:40.243976   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.243984   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:40.243993   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:40.244006   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:40.297909   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:40.297944   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:40.313359   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:40.313385   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:40.391089   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:40.391118   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:40.391136   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:36.178616   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:38.677556   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.356964   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:38.857992   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:41.847033   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:44.346087   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:40.469622   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:40.469652   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:43.010264   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:43.023750   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:43.023823   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:43.058899   59674 cri.go:89] found id: ""
	I0722 11:53:43.058922   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.058930   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:43.058937   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:43.058999   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:43.093308   59674 cri.go:89] found id: ""
	I0722 11:53:43.093328   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.093336   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:43.093341   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:43.093385   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:43.126617   59674 cri.go:89] found id: ""
	I0722 11:53:43.126648   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.126671   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:43.126686   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:43.126737   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:43.159455   59674 cri.go:89] found id: ""
	I0722 11:53:43.159482   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.159492   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:43.159500   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:43.159561   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:43.195726   59674 cri.go:89] found id: ""
	I0722 11:53:43.195749   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.195758   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:43.195766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:43.195830   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:43.231996   59674 cri.go:89] found id: ""
	I0722 11:53:43.232025   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.232038   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:43.232046   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:43.232118   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:43.266911   59674 cri.go:89] found id: ""
	I0722 11:53:43.266936   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.266943   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:43.266948   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:43.267005   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:43.303202   59674 cri.go:89] found id: ""
	I0722 11:53:43.303227   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.303236   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:43.303243   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:43.303255   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:43.377328   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:43.377362   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:43.418732   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:43.418759   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:43.471507   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:43.471536   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:43.485141   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:43.485175   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:43.557071   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:41.178042   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:43.178179   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:41.357090   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:43.856788   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:46.346435   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.347938   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:46.057361   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:46.071701   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:46.071784   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:46.107818   59674 cri.go:89] found id: ""
	I0722 11:53:46.107845   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.107853   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:46.107859   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:46.107952   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:46.141871   59674 cri.go:89] found id: ""
	I0722 11:53:46.141898   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.141906   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:46.141911   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:46.141972   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:46.180980   59674 cri.go:89] found id: ""
	I0722 11:53:46.181004   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.181014   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:46.181021   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:46.181083   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:46.219765   59674 cri.go:89] found id: ""
	I0722 11:53:46.219797   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.219806   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:46.219812   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:46.219866   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:46.259517   59674 cri.go:89] found id: ""
	I0722 11:53:46.259544   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.259554   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:46.259562   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:46.259621   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:46.292190   59674 cri.go:89] found id: ""
	I0722 11:53:46.292220   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.292230   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:46.292239   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:46.292305   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:46.325494   59674 cri.go:89] found id: ""
	I0722 11:53:46.325519   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.325529   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:46.325536   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:46.325608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:46.364367   59674 cri.go:89] found id: ""
	I0722 11:53:46.364403   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.364412   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:46.364422   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:46.364435   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:46.417749   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:46.417792   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:46.433793   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:46.433817   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:46.502075   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:46.502098   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:46.502111   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:46.584038   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:46.584075   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:49.127895   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:49.141601   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:49.141672   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:49.175251   59674 cri.go:89] found id: ""
	I0722 11:53:49.175276   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.175284   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:49.175290   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:49.175346   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:49.214504   59674 cri.go:89] found id: ""
	I0722 11:53:49.214552   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.214563   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:49.214570   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:49.214631   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:49.251844   59674 cri.go:89] found id: ""
	I0722 11:53:49.251872   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.251882   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:49.251889   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:49.251955   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:49.285540   59674 cri.go:89] found id: ""
	I0722 11:53:49.285569   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.285577   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:49.285582   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:49.285630   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:49.323300   59674 cri.go:89] found id: ""
	I0722 11:53:49.323321   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.323331   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:49.323336   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:49.323393   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:49.361571   59674 cri.go:89] found id: ""
	I0722 11:53:49.361599   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.361609   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:49.361615   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:49.361675   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:49.398709   59674 cri.go:89] found id: ""
	I0722 11:53:49.398736   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.398747   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:49.398753   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:49.398813   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:49.430527   59674 cri.go:89] found id: ""
	I0722 11:53:49.430552   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.430564   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:49.430576   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:49.430591   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:49.481517   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:49.481557   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:49.496069   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:49.496094   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:49.563515   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:49.563536   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:49.563549   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:49.645313   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:49.645354   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:45.678130   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.179309   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:45.857932   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.356438   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:50.356527   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:50.348077   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.846675   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.188460   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:52.201620   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:52.201689   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:52.238836   59674 cri.go:89] found id: ""
	I0722 11:53:52.238858   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.238865   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:52.238870   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:52.238932   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:52.275739   59674 cri.go:89] found id: ""
	I0722 11:53:52.275760   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.275768   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:52.275781   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:52.275839   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:52.310362   59674 cri.go:89] found id: ""
	I0722 11:53:52.310390   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.310397   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:52.310402   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:52.310461   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:52.348733   59674 cri.go:89] found id: ""
	I0722 11:53:52.348753   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.348760   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:52.348766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:52.348822   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:52.383052   59674 cri.go:89] found id: ""
	I0722 11:53:52.383079   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.383087   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:52.383094   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:52.383155   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:52.420557   59674 cri.go:89] found id: ""
	I0722 11:53:52.420579   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.420587   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:52.420592   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:52.420655   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:52.454027   59674 cri.go:89] found id: ""
	I0722 11:53:52.454057   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.454066   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:52.454073   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:52.454134   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:52.495433   59674 cri.go:89] found id: ""
	I0722 11:53:52.495458   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.495469   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:52.495480   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:52.495493   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:52.541383   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:52.541417   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:52.595687   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:52.595733   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:52.609965   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:52.609987   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:52.687531   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:52.687552   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:52.687565   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:55.270419   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:55.284577   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:55.284632   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:55.321978   59674 cri.go:89] found id: ""
	I0722 11:53:55.322014   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.322023   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:55.322030   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:55.322092   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:55.358710   59674 cri.go:89] found id: ""
	I0722 11:53:55.358736   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.358746   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:55.358753   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:55.358807   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:55.394784   59674 cri.go:89] found id: ""
	I0722 11:53:55.394810   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.394820   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:55.394827   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:55.394884   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:50.677072   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.678016   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.177624   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.356565   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:54.357061   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.347422   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:57.846266   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.429035   59674 cri.go:89] found id: ""
	I0722 11:53:55.429059   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.429066   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:55.429072   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:55.429122   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:55.464733   59674 cri.go:89] found id: ""
	I0722 11:53:55.464754   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.464761   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:55.464767   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:55.464824   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:55.500113   59674 cri.go:89] found id: ""
	I0722 11:53:55.500140   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.500152   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:55.500164   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:55.500227   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:55.536013   59674 cri.go:89] found id: ""
	I0722 11:53:55.536040   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.536050   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:55.536056   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:55.536129   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:55.575385   59674 cri.go:89] found id: ""
	I0722 11:53:55.575412   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.575420   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:55.575428   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:55.575439   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:55.628427   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:55.628459   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:55.642648   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:55.642677   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:55.715236   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:55.715258   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:55.715270   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:55.794200   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:55.794233   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:58.336329   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:58.351000   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:58.351056   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:58.389817   59674 cri.go:89] found id: ""
	I0722 11:53:58.389841   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.389849   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:58.389854   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:58.389902   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:58.430814   59674 cri.go:89] found id: ""
	I0722 11:53:58.430843   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.430852   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:58.430857   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:58.430917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:58.477898   59674 cri.go:89] found id: ""
	I0722 11:53:58.477928   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.477938   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:58.477947   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:58.477992   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:58.513426   59674 cri.go:89] found id: ""
	I0722 11:53:58.513450   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.513461   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:58.513468   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:58.513530   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:58.546455   59674 cri.go:89] found id: ""
	I0722 11:53:58.546484   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.546494   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:58.546501   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:58.546560   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:58.582248   59674 cri.go:89] found id: ""
	I0722 11:53:58.582273   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.582280   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:58.582286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:58.582339   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:58.617221   59674 cri.go:89] found id: ""
	I0722 11:53:58.617246   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.617253   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:58.617259   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:58.617321   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:58.648896   59674 cri.go:89] found id: ""
	I0722 11:53:58.648930   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.648941   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:58.648949   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:58.648962   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:58.701735   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:58.701771   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:58.715747   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:58.715766   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:58.782104   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:58.782125   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:58.782136   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:58.868634   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:58.868664   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:57.677281   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:00.179188   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:56.856873   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:58.864754   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:59.846378   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:02.346626   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:04.346748   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:01.410874   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:01.423839   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:01.423914   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:01.460156   59674 cri.go:89] found id: ""
	I0722 11:54:01.460181   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.460191   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:01.460198   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:01.460252   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:01.497130   59674 cri.go:89] found id: ""
	I0722 11:54:01.497156   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.497165   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:01.497172   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:01.497228   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:01.532805   59674 cri.go:89] found id: ""
	I0722 11:54:01.532832   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.532842   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:01.532849   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:01.532907   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:01.569955   59674 cri.go:89] found id: ""
	I0722 11:54:01.569989   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.569999   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:01.570014   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:01.570067   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:01.602937   59674 cri.go:89] found id: ""
	I0722 11:54:01.602967   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.602977   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:01.602983   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:01.603033   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:01.634250   59674 cri.go:89] found id: ""
	I0722 11:54:01.634276   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.634283   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:01.634289   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:01.634337   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:01.670256   59674 cri.go:89] found id: ""
	I0722 11:54:01.670286   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.670295   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:01.670300   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:01.670348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:01.708555   59674 cri.go:89] found id: ""
	I0722 11:54:01.708577   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.708584   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:01.708592   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:01.708603   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:01.723065   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:01.723090   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:01.790642   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:01.790662   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:01.790673   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:01.887827   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:01.887861   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:01.927121   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:01.927143   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:04.479248   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:04.493038   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:04.493101   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:04.527516   59674 cri.go:89] found id: ""
	I0722 11:54:04.527539   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.527547   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:04.527557   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:04.527603   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:04.565830   59674 cri.go:89] found id: ""
	I0722 11:54:04.565863   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.565874   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:04.565882   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:04.565970   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:04.606198   59674 cri.go:89] found id: ""
	I0722 11:54:04.606223   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.606235   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:04.606242   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:04.606301   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:04.650372   59674 cri.go:89] found id: ""
	I0722 11:54:04.650394   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.650403   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:04.650411   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:04.650473   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:04.689556   59674 cri.go:89] found id: ""
	I0722 11:54:04.689580   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.689587   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:04.689592   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:04.689648   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:04.724954   59674 cri.go:89] found id: ""
	I0722 11:54:04.724986   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.724997   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:04.725004   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:04.725057   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:04.769000   59674 cri.go:89] found id: ""
	I0722 11:54:04.769024   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.769031   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:04.769037   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:04.769088   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:04.802022   59674 cri.go:89] found id: ""
	I0722 11:54:04.802042   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.802049   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:04.802057   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:04.802067   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:04.855969   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:04.856006   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:04.871210   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:04.871238   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:04.938050   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:04.938069   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:04.938082   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:05.014415   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:05.014449   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:02.677036   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:04.677779   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:01.356993   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:03.856173   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:06.847195   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:08.847333   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:07.556725   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:07.583525   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:07.583600   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:07.618546   59674 cri.go:89] found id: ""
	I0722 11:54:07.618574   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.618584   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:07.618591   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:07.618651   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:07.655218   59674 cri.go:89] found id: ""
	I0722 11:54:07.655247   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.655256   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:07.655261   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:07.655321   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:07.695453   59674 cri.go:89] found id: ""
	I0722 11:54:07.695482   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.695491   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:07.695499   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:07.695558   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:07.729887   59674 cri.go:89] found id: ""
	I0722 11:54:07.729922   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.729932   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:07.729939   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:07.729998   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:07.768429   59674 cri.go:89] found id: ""
	I0722 11:54:07.768451   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.768458   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:07.768464   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:07.768520   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:07.804372   59674 cri.go:89] found id: ""
	I0722 11:54:07.804408   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.804419   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:07.804426   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:07.804479   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:07.840924   59674 cri.go:89] found id: ""
	I0722 11:54:07.840948   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.840958   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:07.840965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:07.841027   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:07.877796   59674 cri.go:89] found id: ""
	I0722 11:54:07.877823   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.877830   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:07.877838   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:07.877849   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:07.930437   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:07.930467   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:07.943581   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:07.943611   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:08.013944   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:08.013963   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:08.013973   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:08.090969   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:08.091007   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:07.178423   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:09.178648   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:05.856697   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:07.857718   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:10.356584   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:11.345407   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:13.346477   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:10.631507   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:10.644886   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:10.644958   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:10.679242   59674 cri.go:89] found id: ""
	I0722 11:54:10.679268   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.679278   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:10.679284   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:10.679340   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:10.714324   59674 cri.go:89] found id: ""
	I0722 11:54:10.714351   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.714358   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:10.714364   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:10.714425   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:10.751053   59674 cri.go:89] found id: ""
	I0722 11:54:10.751075   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.751090   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:10.751097   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:10.751164   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:10.788736   59674 cri.go:89] found id: ""
	I0722 11:54:10.788765   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.788775   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:10.788782   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:10.788899   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:10.823780   59674 cri.go:89] found id: ""
	I0722 11:54:10.823804   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.823814   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:10.823821   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:10.823884   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:10.859708   59674 cri.go:89] found id: ""
	I0722 11:54:10.859731   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.859741   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:10.859748   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:10.859804   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:10.893364   59674 cri.go:89] found id: ""
	I0722 11:54:10.893390   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.893400   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:10.893409   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:10.893471   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:10.929444   59674 cri.go:89] found id: ""
	I0722 11:54:10.929472   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.929481   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:10.929489   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:10.929501   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:10.968567   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:10.968598   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:11.024447   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:11.024484   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:11.039405   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:11.039429   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:11.116322   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:11.116341   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:11.116356   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:13.697581   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:13.711738   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:13.711831   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:13.747711   59674 cri.go:89] found id: ""
	I0722 11:54:13.747742   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.747750   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:13.747757   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:13.747812   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:13.790965   59674 cri.go:89] found id: ""
	I0722 11:54:13.790987   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.790997   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:13.791005   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:13.791053   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:13.829043   59674 cri.go:89] found id: ""
	I0722 11:54:13.829071   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.829080   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:13.829086   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:13.829159   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:13.865542   59674 cri.go:89] found id: ""
	I0722 11:54:13.865560   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.865567   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:13.865572   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:13.865615   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:13.897709   59674 cri.go:89] found id: ""
	I0722 11:54:13.897749   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.897762   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:13.897769   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:13.897833   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:13.931319   59674 cri.go:89] found id: ""
	I0722 11:54:13.931339   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.931348   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:13.931355   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:13.931409   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:13.987927   59674 cri.go:89] found id: ""
	I0722 11:54:13.987954   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.987964   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:13.987970   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:13.988030   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:14.028680   59674 cri.go:89] found id: ""
	I0722 11:54:14.028706   59674 logs.go:276] 0 containers: []
	W0722 11:54:14.028716   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:14.028726   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:14.028743   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:14.089863   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:14.089904   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:14.103664   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:14.103691   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:14.174453   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:14.174479   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:14.174496   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:14.260748   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:14.260780   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:11.677037   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:13.679784   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:12.856073   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:14.857810   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:15.846577   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:17.846873   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:16.800474   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:16.814408   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:16.814472   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:16.849936   59674 cri.go:89] found id: ""
	I0722 11:54:16.849963   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.849972   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:16.849979   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:16.850037   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:16.884323   59674 cri.go:89] found id: ""
	I0722 11:54:16.884349   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.884360   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:16.884367   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:16.884445   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:16.921549   59674 cri.go:89] found id: ""
	I0722 11:54:16.921635   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.921652   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:16.921660   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:16.921726   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:16.959670   59674 cri.go:89] found id: ""
	I0722 11:54:16.959701   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.959711   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:16.959719   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:16.959779   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:16.995577   59674 cri.go:89] found id: ""
	I0722 11:54:16.995605   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.995615   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:16.995624   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:16.995683   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:17.032026   59674 cri.go:89] found id: ""
	I0722 11:54:17.032056   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.032067   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:17.032075   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:17.032156   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:17.068309   59674 cri.go:89] found id: ""
	I0722 11:54:17.068337   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.068348   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:17.068355   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:17.068433   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:17.106731   59674 cri.go:89] found id: ""
	I0722 11:54:17.106760   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.106776   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:17.106787   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:17.106801   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:17.159944   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:17.159971   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:17.174479   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:17.174513   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:17.249311   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:17.249332   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:17.249345   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:17.335527   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:17.335561   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:19.874791   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:19.892887   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:19.892961   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:19.945700   59674 cri.go:89] found id: ""
	I0722 11:54:19.945729   59674 logs.go:276] 0 containers: []
	W0722 11:54:19.945737   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:19.945742   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:19.945799   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:19.996027   59674 cri.go:89] found id: ""
	I0722 11:54:19.996062   59674 logs.go:276] 0 containers: []
	W0722 11:54:19.996072   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:19.996078   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:19.996133   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:20.040793   59674 cri.go:89] found id: ""
	I0722 11:54:20.040820   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.040830   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:20.040837   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:20.040906   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:20.073737   59674 cri.go:89] found id: ""
	I0722 11:54:20.073760   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.073768   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:20.073774   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:20.073817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:20.108255   59674 cri.go:89] found id: ""
	I0722 11:54:20.108280   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.108287   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:20.108294   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:20.108342   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:20.143140   59674 cri.go:89] found id: ""
	I0722 11:54:20.143165   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.143174   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:20.143180   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:20.143225   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:20.177009   59674 cri.go:89] found id: ""
	I0722 11:54:20.177030   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.177037   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:20.177043   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:20.177089   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:20.215743   59674 cri.go:89] found id: ""
	I0722 11:54:20.215765   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.215773   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:20.215781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:20.215791   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:20.267872   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:20.267905   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:20.281601   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:20.281626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:20.352347   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:20.352364   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:20.352376   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:16.178494   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:18.676724   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:17.357519   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:19.856259   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:20.346488   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:22.847018   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:20.431695   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:20.431727   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:22.974218   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:22.988161   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:22.988235   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:23.024542   59674 cri.go:89] found id: ""
	I0722 11:54:23.024571   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.024581   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:23.024588   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:23.024656   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:23.067343   59674 cri.go:89] found id: ""
	I0722 11:54:23.067367   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.067376   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:23.067383   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:23.067443   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:23.103711   59674 cri.go:89] found id: ""
	I0722 11:54:23.103741   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.103751   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:23.103758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:23.103817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:23.137896   59674 cri.go:89] found id: ""
	I0722 11:54:23.137926   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.137937   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:23.137944   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:23.138002   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:23.174689   59674 cri.go:89] found id: ""
	I0722 11:54:23.174722   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.174733   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:23.174742   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:23.174795   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:23.208669   59674 cri.go:89] found id: ""
	I0722 11:54:23.208690   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.208700   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:23.208708   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:23.208766   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:23.243286   59674 cri.go:89] found id: ""
	I0722 11:54:23.243314   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.243326   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:23.243335   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:23.243401   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:23.279277   59674 cri.go:89] found id: ""
	I0722 11:54:23.279303   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.279312   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:23.279324   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:23.279337   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:23.332016   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:23.332045   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:23.346383   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:23.346417   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:23.421449   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:23.421471   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:23.421486   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:23.507395   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:23.507432   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:20.678148   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:23.180048   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:21.856482   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:24.357098   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:25.346414   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:27.847108   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:26.053610   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:26.068359   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:26.068448   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:26.102425   59674 cri.go:89] found id: ""
	I0722 11:54:26.102454   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.102465   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:26.102472   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:26.102531   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:26.135572   59674 cri.go:89] found id: ""
	I0722 11:54:26.135598   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.135608   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:26.135616   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:26.135682   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:26.175015   59674 cri.go:89] found id: ""
	I0722 11:54:26.175044   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.175054   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:26.175062   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:26.175123   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:26.209186   59674 cri.go:89] found id: ""
	I0722 11:54:26.209209   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.209216   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:26.209221   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:26.209275   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:26.248477   59674 cri.go:89] found id: ""
	I0722 11:54:26.248500   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.248507   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:26.248512   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:26.248590   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:26.281481   59674 cri.go:89] found id: ""
	I0722 11:54:26.281506   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.281515   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:26.281520   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:26.281580   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:26.314467   59674 cri.go:89] found id: ""
	I0722 11:54:26.314496   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.314503   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:26.314509   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:26.314556   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:26.349396   59674 cri.go:89] found id: ""
	I0722 11:54:26.349422   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.349431   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:26.349441   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:26.349454   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:26.403227   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:26.403253   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:26.415860   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:26.415882   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:26.484768   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:26.484793   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:26.484809   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:26.563360   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:26.563396   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:29.103764   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:29.117120   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:29.117193   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:29.153198   59674 cri.go:89] found id: ""
	I0722 11:54:29.153241   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.153252   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:29.153260   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:29.153324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:29.190406   59674 cri.go:89] found id: ""
	I0722 11:54:29.190426   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.190433   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:29.190438   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:29.190486   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:29.232049   59674 cri.go:89] found id: ""
	I0722 11:54:29.232073   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.232080   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:29.232085   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:29.232147   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:29.270174   59674 cri.go:89] found id: ""
	I0722 11:54:29.270200   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.270208   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:29.270218   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:29.270268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:29.307709   59674 cri.go:89] found id: ""
	I0722 11:54:29.307733   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.307740   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:29.307746   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:29.307802   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:29.343807   59674 cri.go:89] found id: ""
	I0722 11:54:29.343832   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.343842   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:29.343850   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:29.343907   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:29.380240   59674 cri.go:89] found id: ""
	I0722 11:54:29.380263   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.380270   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:29.380276   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:29.380332   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:29.412785   59674 cri.go:89] found id: ""
	I0722 11:54:29.412811   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.412820   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:29.412830   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:29.412844   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:29.470948   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:29.470985   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:29.485120   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:29.485146   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:29.558760   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:29.558778   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:29.558792   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:29.638093   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:29.638123   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:25.677216   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:28.177196   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:30.179148   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:26.357390   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:28.856928   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:30.345586   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:32.346444   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:34.347606   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:32.183511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:32.196719   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:32.196796   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:32.229436   59674 cri.go:89] found id: ""
	I0722 11:54:32.229466   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.229474   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:32.229480   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:32.229533   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:32.271971   59674 cri.go:89] found id: ""
	I0722 11:54:32.271998   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.272008   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:32.272017   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:32.272086   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:32.302967   59674 cri.go:89] found id: ""
	I0722 11:54:32.302991   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.302999   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:32.303005   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:32.303053   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:32.334443   59674 cri.go:89] found id: ""
	I0722 11:54:32.334468   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.334478   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:32.334485   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:32.334544   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:32.371586   59674 cri.go:89] found id: ""
	I0722 11:54:32.371612   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.371622   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:32.371630   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:32.371693   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:32.419920   59674 cri.go:89] found id: ""
	I0722 11:54:32.419954   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.419966   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:32.419974   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:32.420034   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:32.459377   59674 cri.go:89] found id: ""
	I0722 11:54:32.459398   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.459405   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:32.459411   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:32.459472   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:32.500740   59674 cri.go:89] found id: ""
	I0722 11:54:32.500764   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.500771   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:32.500781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:32.500796   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:32.551285   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:32.551316   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:32.564448   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:32.564474   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:32.637652   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:32.637679   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:32.637694   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:32.721599   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:32.721638   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:35.265202   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:35.278766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:35.278844   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:35.312545   59674 cri.go:89] found id: ""
	I0722 11:54:35.312574   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.312582   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:35.312587   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:35.312637   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:35.346988   59674 cri.go:89] found id: ""
	I0722 11:54:35.347014   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.347024   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:35.347032   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:35.347090   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:35.382876   59674 cri.go:89] found id: ""
	I0722 11:54:35.382908   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.382920   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:35.382929   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:35.382997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:32.677327   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:34.677947   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:31.356011   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:33.356576   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:36.846349   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:39.346311   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:35.418093   59674 cri.go:89] found id: ""
	I0722 11:54:35.418115   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.418122   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:35.418129   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:35.418186   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:35.455262   59674 cri.go:89] found id: ""
	I0722 11:54:35.455291   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.455301   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:35.455306   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:35.455362   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:35.494893   59674 cri.go:89] found id: ""
	I0722 11:54:35.494924   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.494934   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:35.494945   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:35.495007   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:35.529768   59674 cri.go:89] found id: ""
	I0722 11:54:35.529791   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.529798   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:35.529804   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:35.529850   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:35.564972   59674 cri.go:89] found id: ""
	I0722 11:54:35.565001   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.565012   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:35.565024   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:35.565039   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:35.615985   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:35.616025   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:35.630133   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:35.630156   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:35.699669   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:35.699697   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:35.699711   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:35.779737   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:35.779771   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:38.320368   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:38.334371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:38.334443   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:38.371050   59674 cri.go:89] found id: ""
	I0722 11:54:38.371081   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.371088   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:38.371109   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:38.371170   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:38.410676   59674 cri.go:89] found id: ""
	I0722 11:54:38.410698   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.410706   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:38.410712   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:38.410770   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:38.447331   59674 cri.go:89] found id: ""
	I0722 11:54:38.447357   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.447366   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:38.447371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:38.447426   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:38.483548   59674 cri.go:89] found id: ""
	I0722 11:54:38.483589   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.483600   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:38.483608   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:38.483669   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:38.521694   59674 cri.go:89] found id: ""
	I0722 11:54:38.521723   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.521737   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:38.521742   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:38.521799   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:38.560507   59674 cri.go:89] found id: ""
	I0722 11:54:38.560532   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.560543   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:38.560550   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:38.560609   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:38.595734   59674 cri.go:89] found id: ""
	I0722 11:54:38.595761   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.595771   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:38.595778   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:38.595839   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:38.634176   59674 cri.go:89] found id: ""
	I0722 11:54:38.634198   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.634205   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:38.634213   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:38.634224   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:38.688196   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:38.688235   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:38.701554   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:38.701583   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:38.772547   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:38.772575   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:38.772590   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:38.858025   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:38.858056   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:37.179449   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:39.179903   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:35.856424   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:38.357566   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:41.347531   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:43.846195   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:41.400777   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:41.415370   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:41.415427   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:41.448023   59674 cri.go:89] found id: ""
	I0722 11:54:41.448045   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.448052   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:41.448058   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:41.448104   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:41.480745   59674 cri.go:89] found id: ""
	I0722 11:54:41.480766   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.480774   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:41.480779   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:41.480830   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:41.514627   59674 cri.go:89] found id: ""
	I0722 11:54:41.514651   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.514666   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:41.514673   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:41.514731   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:41.548226   59674 cri.go:89] found id: ""
	I0722 11:54:41.548255   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.548267   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:41.548274   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:41.548325   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:41.581361   59674 cri.go:89] found id: ""
	I0722 11:54:41.581383   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.581390   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:41.581396   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:41.581452   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:41.616249   59674 cri.go:89] found id: ""
	I0722 11:54:41.616277   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.616287   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:41.616295   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:41.616361   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:41.651569   59674 cri.go:89] found id: ""
	I0722 11:54:41.651593   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.651601   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:41.651607   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:41.651657   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:41.685173   59674 cri.go:89] found id: ""
	I0722 11:54:41.685194   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.685202   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:41.685209   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:41.685222   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:41.762374   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:41.762393   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:41.762405   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:41.843370   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:41.843403   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:41.883097   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:41.883127   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:41.933824   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:41.933854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:44.447568   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:44.461528   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:44.461608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:44.497926   59674 cri.go:89] found id: ""
	I0722 11:54:44.497951   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.497958   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:44.497963   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:44.498023   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:44.534483   59674 cri.go:89] found id: ""
	I0722 11:54:44.534507   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.534515   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:44.534520   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:44.534565   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:44.573106   59674 cri.go:89] found id: ""
	I0722 11:54:44.573140   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.573148   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:44.573154   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:44.573204   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:44.610565   59674 cri.go:89] found id: ""
	I0722 11:54:44.610612   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.610626   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:44.610634   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:44.610697   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:44.646946   59674 cri.go:89] found id: ""
	I0722 11:54:44.646980   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.646994   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:44.647001   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:44.647060   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:44.685876   59674 cri.go:89] found id: ""
	I0722 11:54:44.685904   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.685913   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:44.685919   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:44.685969   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:44.720398   59674 cri.go:89] found id: ""
	I0722 11:54:44.720425   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.720434   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:44.720441   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:44.720506   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:44.757472   59674 cri.go:89] found id: ""
	I0722 11:54:44.757501   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.757511   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:44.757522   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:44.757535   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:44.807442   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:44.807468   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:44.820432   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:44.820457   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:44.892182   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:44.892199   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:44.892209   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:44.976545   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:44.976580   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
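The block above is one pass of the retry loop that PID 59674 repeats roughly every three seconds throughout this log: probe for a kube-apiserver process, ask the CRI runtime whether any control-plane container exists, then gather kubelet, dmesg, describe-nodes, CRI-O and container-status logs. A minimal shell sketch of the same probe, using only commands that appear verbatim in the ssh_runner lines (run on the minikube node, not the host):

    # Check each control-plane component the way logs.go does; empty output from
    # crictl corresponds to the "No container was found matching <name>" warnings.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      [ -z "${ids}" ] && echo "No container was found matching \"${name}\""
    done

    # Same log sources the loop collects afterwards:
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a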
	I0722 11:54:41.677120   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:44.178554   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:40.855578   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:42.856278   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:44.857519   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:45.846257   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:47.846886   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
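Interleaved with that loop, three other concurrently running profiles (PIDs 58921, 60225 and 59477) are polling their metrics-server pods, which never report Ready during this window. A hedged sketch of how such a stuck pod could be inspected by hand; the kubectl context is a placeholder and is not taken from this log, only the pod name is:

    # <profile> stands for the minikube profile that owns the pod.
    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-wm2w8 -o wide
    kubectl --context <profile> -n kube-system describe pod metrics-server-569cc877fc-wm2w8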
	I0722 11:54:47.519413   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:47.532974   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:47.533035   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:47.570869   59674 cri.go:89] found id: ""
	I0722 11:54:47.570904   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.570915   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:47.570923   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:47.571055   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:47.606020   59674 cri.go:89] found id: ""
	I0722 11:54:47.606045   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.606052   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:47.606057   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:47.606106   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:47.642717   59674 cri.go:89] found id: ""
	I0722 11:54:47.642741   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.642752   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:47.642758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:47.642817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:47.677761   59674 cri.go:89] found id: ""
	I0722 11:54:47.677786   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.677796   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:47.677803   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:47.677863   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:47.710989   59674 cri.go:89] found id: ""
	I0722 11:54:47.711016   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.711025   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:47.711032   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:47.711097   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:47.744814   59674 cri.go:89] found id: ""
	I0722 11:54:47.744839   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.744847   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:47.744853   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:47.744904   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:47.778926   59674 cri.go:89] found id: ""
	I0722 11:54:47.778953   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.778960   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:47.778965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:47.779015   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:47.818419   59674 cri.go:89] found id: ""
	I0722 11:54:47.818458   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.818465   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:47.818473   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:47.818485   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:47.870867   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:47.870892   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:47.884504   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:47.884523   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:47.952449   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:47.952470   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:47.952485   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:48.035731   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:48.035763   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:46.181522   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:48.676888   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:46.860517   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:49.356456   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:50.346125   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:52.848790   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:50.589071   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:50.602786   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:50.602880   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:50.638324   59674 cri.go:89] found id: ""
	I0722 11:54:50.638355   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.638366   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:50.638375   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:50.638438   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:50.674906   59674 cri.go:89] found id: ""
	I0722 11:54:50.674932   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.674947   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:50.674955   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:50.675017   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:50.709284   59674 cri.go:89] found id: ""
	I0722 11:54:50.709313   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.709322   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:50.709328   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:50.709387   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:50.748595   59674 cri.go:89] found id: ""
	I0722 11:54:50.748623   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.748632   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:50.748638   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:50.748695   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:50.782681   59674 cri.go:89] found id: ""
	I0722 11:54:50.782707   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.782716   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:50.782721   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:50.782797   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:50.820037   59674 cri.go:89] found id: ""
	I0722 11:54:50.820067   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.820077   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:50.820084   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:50.820150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:50.857807   59674 cri.go:89] found id: ""
	I0722 11:54:50.857835   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.857845   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:50.857852   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:50.857925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:50.894924   59674 cri.go:89] found id: ""
	I0722 11:54:50.894946   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.894954   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:50.894962   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:50.894981   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:50.947373   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:50.947407   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:50.962243   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:50.962272   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:51.041450   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:51.041474   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:51.041488   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:51.133982   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:51.134018   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:53.678461   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:53.691710   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:53.691778   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:53.726266   59674 cri.go:89] found id: ""
	I0722 11:54:53.726294   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.726305   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:53.726313   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:53.726366   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:53.759262   59674 cri.go:89] found id: ""
	I0722 11:54:53.759291   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.759303   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:53.759311   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:53.759381   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:53.795859   59674 cri.go:89] found id: ""
	I0722 11:54:53.795894   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.795906   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:53.795913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:53.795975   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:53.842343   59674 cri.go:89] found id: ""
	I0722 11:54:53.842366   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.842379   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:53.842387   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:53.842444   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:53.882648   59674 cri.go:89] found id: ""
	I0722 11:54:53.882674   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.882684   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:53.882691   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:53.882751   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:53.914352   59674 cri.go:89] found id: ""
	I0722 11:54:53.914373   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.914380   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:53.914386   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:53.914442   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:53.952257   59674 cri.go:89] found id: ""
	I0722 11:54:53.952286   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.952296   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:53.952301   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:53.952348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:53.991612   59674 cri.go:89] found id: ""
	I0722 11:54:53.991642   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.991651   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:53.991661   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:53.991682   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:54.065253   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:54.065271   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:54.065285   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:54.153570   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:54.153603   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:54.195100   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:54.195138   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:54.246784   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:54.246812   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:50.677516   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:53.180319   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.182749   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:51.356623   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:53.856817   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.346845   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:57.846691   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:56.762702   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:56.776501   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:56.776567   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:56.809838   59674 cri.go:89] found id: ""
	I0722 11:54:56.809866   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.809874   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:56.809882   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:56.809934   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:56.845567   59674 cri.go:89] found id: ""
	I0722 11:54:56.845594   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.845602   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:56.845610   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:56.845672   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:56.879899   59674 cri.go:89] found id: ""
	I0722 11:54:56.879929   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.879939   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:56.879946   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:56.880000   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:56.911631   59674 cri.go:89] found id: ""
	I0722 11:54:56.911658   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.911667   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:56.911675   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:56.911734   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:56.946101   59674 cri.go:89] found id: ""
	I0722 11:54:56.946124   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.946132   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:56.946142   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:56.946211   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:56.980265   59674 cri.go:89] found id: ""
	I0722 11:54:56.980289   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.980301   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:56.980308   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:56.980367   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:57.014902   59674 cri.go:89] found id: ""
	I0722 11:54:57.014935   59674 logs.go:276] 0 containers: []
	W0722 11:54:57.014951   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:57.014958   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:57.015021   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:57.051573   59674 cri.go:89] found id: ""
	I0722 11:54:57.051597   59674 logs.go:276] 0 containers: []
	W0722 11:54:57.051605   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:57.051613   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:57.051626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:57.065650   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:57.065683   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:57.133230   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:57.133257   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:57.133275   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:57.217002   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:57.217038   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:57.260236   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:57.260264   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:59.812785   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:59.826782   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:59.826836   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:59.863375   59674 cri.go:89] found id: ""
	I0722 11:54:59.863404   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.863414   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:59.863423   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:59.863484   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:59.902161   59674 cri.go:89] found id: ""
	I0722 11:54:59.902193   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.902204   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:59.902211   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:59.902263   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:59.945153   59674 cri.go:89] found id: ""
	I0722 11:54:59.945182   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.945193   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:59.945201   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:59.945265   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:59.989535   59674 cri.go:89] found id: ""
	I0722 11:54:59.989562   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.989570   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:59.989575   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:59.989643   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:00.028977   59674 cri.go:89] found id: ""
	I0722 11:55:00.029001   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.029009   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:00.029017   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:00.029068   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:00.065396   59674 cri.go:89] found id: ""
	I0722 11:55:00.065425   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.065437   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:00.065447   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:00.065502   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:00.104354   59674 cri.go:89] found id: ""
	I0722 11:55:00.104397   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.104409   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:00.104417   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:00.104480   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:00.141798   59674 cri.go:89] found id: ""
	I0722 11:55:00.141822   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.141829   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:00.141838   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:00.141853   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:00.195791   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:00.195823   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:00.214812   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:00.214845   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:00.307286   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:00.307311   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:00.307323   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:00.409770   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:00.409805   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:57.676737   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:59.677273   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.857348   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:58.356555   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:59.846954   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.345998   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:04.346077   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.951630   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:02.964673   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:02.964728   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:03.005256   59674 cri.go:89] found id: ""
	I0722 11:55:03.005285   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.005296   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:03.005303   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:03.005359   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:03.037558   59674 cri.go:89] found id: ""
	I0722 11:55:03.037587   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.037595   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:03.037600   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:03.037646   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:03.071168   59674 cri.go:89] found id: ""
	I0722 11:55:03.071196   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.071206   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:03.071214   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:03.071271   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:03.104212   59674 cri.go:89] found id: ""
	I0722 11:55:03.104238   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.104248   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:03.104255   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:03.104313   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:03.141378   59674 cri.go:89] found id: ""
	I0722 11:55:03.141401   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.141409   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:03.141414   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:03.141458   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:03.178881   59674 cri.go:89] found id: ""
	I0722 11:55:03.178906   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.178915   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:03.178923   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:03.178987   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:03.215768   59674 cri.go:89] found id: ""
	I0722 11:55:03.215796   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.215804   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:03.215810   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:03.215854   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:03.256003   59674 cri.go:89] found id: ""
	I0722 11:55:03.256029   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.256041   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:03.256051   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:03.256069   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:03.308182   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:03.308216   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:03.323870   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:03.323903   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:03.406646   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:03.406670   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:03.406682   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:03.490947   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:03.490984   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:01.677312   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:03.677505   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:00.856013   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.856211   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:04.857113   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.348448   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:08.846007   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.030341   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:06.046814   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:06.046874   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:06.088735   59674 cri.go:89] found id: ""
	I0722 11:55:06.088756   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.088764   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:06.088770   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:06.088823   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:06.153138   59674 cri.go:89] found id: ""
	I0722 11:55:06.153165   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.153174   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:06.153181   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:06.153241   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:06.203479   59674 cri.go:89] found id: ""
	I0722 11:55:06.203506   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.203516   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:06.203523   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:06.203585   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:06.239632   59674 cri.go:89] found id: ""
	I0722 11:55:06.239661   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.239671   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:06.239678   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:06.239739   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:06.278663   59674 cri.go:89] found id: ""
	I0722 11:55:06.278693   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.278703   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:06.278711   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:06.278772   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:06.318291   59674 cri.go:89] found id: ""
	I0722 11:55:06.318315   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.318323   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:06.318329   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:06.318382   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:06.355362   59674 cri.go:89] found id: ""
	I0722 11:55:06.355383   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.355390   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:06.355395   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:06.355446   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:06.395032   59674 cri.go:89] found id: ""
	I0722 11:55:06.395062   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.395073   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:06.395084   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:06.395098   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:06.451585   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:06.451623   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:06.466009   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:06.466037   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:06.534051   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:06.534071   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:06.534082   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:06.617165   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:06.617202   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:09.155242   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:09.169327   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:09.169389   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:09.209138   59674 cri.go:89] found id: ""
	I0722 11:55:09.209165   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.209174   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:09.209181   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:09.209243   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:09.249129   59674 cri.go:89] found id: ""
	I0722 11:55:09.249156   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.249167   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:09.249175   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:09.249237   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:09.284350   59674 cri.go:89] found id: ""
	I0722 11:55:09.284374   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.284400   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:09.284416   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:09.284487   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:09.317288   59674 cri.go:89] found id: ""
	I0722 11:55:09.317315   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.317322   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:09.317327   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:09.317374   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:09.353227   59674 cri.go:89] found id: ""
	I0722 11:55:09.353249   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.353259   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:09.353266   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:09.353324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:09.388376   59674 cri.go:89] found id: ""
	I0722 11:55:09.388434   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.388442   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:09.388448   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:09.388498   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:09.422197   59674 cri.go:89] found id: ""
	I0722 11:55:09.422221   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.422228   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:09.422235   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:09.422282   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:09.455321   59674 cri.go:89] found id: ""
	I0722 11:55:09.455350   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.455360   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:09.455370   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:09.455384   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:09.536331   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:09.536366   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:09.578847   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:09.578880   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:09.630597   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:09.630626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:09.644531   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:09.644557   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:09.710502   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
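Every "describe nodes" attempt in this stretch fails identically: with no kube-apiserver container running, nothing listens on localhost:8443, so the bundled v1.20.0 kubectl is refused the connection. A quick confirmation from the node, assuming curl is available there (it is not shown in this log, so that is an assumption):

    # Empty crictl output plus a refused health probe confirm the apiserver never started.
    sudo crictl ps -a --quiet --name=kube-apiserver
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on 8443"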
	I0722 11:55:05.677998   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:07.678875   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:10.179254   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.857151   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:09.355988   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:11.345887   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:13.346945   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:12.210716   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:12.223909   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:12.223969   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:12.259241   59674 cri.go:89] found id: ""
	I0722 11:55:12.259266   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.259275   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:12.259282   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:12.259344   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:12.293967   59674 cri.go:89] found id: ""
	I0722 11:55:12.294000   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.294007   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:12.294013   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:12.294061   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:12.328073   59674 cri.go:89] found id: ""
	I0722 11:55:12.328106   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.328114   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:12.328121   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:12.328180   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:12.363176   59674 cri.go:89] found id: ""
	I0722 11:55:12.363200   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.363207   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:12.363213   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:12.363287   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:12.398145   59674 cri.go:89] found id: ""
	I0722 11:55:12.398171   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.398180   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:12.398185   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:12.398231   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:12.431824   59674 cri.go:89] found id: ""
	I0722 11:55:12.431853   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.431861   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:12.431867   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:12.431925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:12.465097   59674 cri.go:89] found id: ""
	I0722 11:55:12.465128   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.465135   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:12.465140   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:12.465186   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:12.502934   59674 cri.go:89] found id: ""
	I0722 11:55:12.502965   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.502974   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:12.502984   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:12.502999   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:12.541950   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:12.541979   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:12.592632   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:12.592660   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:12.606073   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:12.606098   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:12.675388   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:12.675417   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:12.675432   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:15.253008   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:15.266957   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:15.267028   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:15.303035   59674 cri.go:89] found id: ""
	I0722 11:55:15.303069   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.303080   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:15.303088   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:15.303150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:15.338089   59674 cri.go:89] found id: ""
	I0722 11:55:15.338113   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.338121   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:15.338126   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:15.338184   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:15.376973   59674 cri.go:89] found id: ""
	I0722 11:55:15.376998   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.377005   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:15.377015   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:15.377075   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:12.678613   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.178912   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:11.356248   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:13.855992   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.845568   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:17.845680   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.416466   59674 cri.go:89] found id: ""
	I0722 11:55:15.416491   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.416500   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:15.416507   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:15.416565   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:15.456472   59674 cri.go:89] found id: ""
	I0722 11:55:15.456501   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.456511   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:15.456519   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:15.456580   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:15.491963   59674 cri.go:89] found id: ""
	I0722 11:55:15.491991   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.491999   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:15.492005   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:15.492062   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:15.530819   59674 cri.go:89] found id: ""
	I0722 11:55:15.530847   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.530857   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:15.530865   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:15.530934   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:15.569388   59674 cri.go:89] found id: ""
	I0722 11:55:15.569415   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.569422   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:15.569430   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:15.569439   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:15.623949   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:15.623981   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:15.637828   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:15.637848   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:15.707733   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:15.707754   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:15.707765   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:15.787435   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:15.787473   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:18.329310   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:18.342412   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:18.342476   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:18.379542   59674 cri.go:89] found id: ""
	I0722 11:55:18.379563   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.379570   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:18.379575   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:18.379657   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:18.414442   59674 cri.go:89] found id: ""
	I0722 11:55:18.414468   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.414477   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:18.414483   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:18.414526   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:18.454571   59674 cri.go:89] found id: ""
	I0722 11:55:18.454598   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.454608   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:18.454614   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:18.454658   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:18.491012   59674 cri.go:89] found id: ""
	I0722 11:55:18.491039   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.491047   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:18.491052   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:18.491114   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:18.525923   59674 cri.go:89] found id: ""
	I0722 11:55:18.525952   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.525962   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:18.525970   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:18.526031   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:18.560288   59674 cri.go:89] found id: ""
	I0722 11:55:18.560315   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.560325   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:18.560332   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:18.560412   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:18.596674   59674 cri.go:89] found id: ""
	I0722 11:55:18.596698   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.596706   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:18.596712   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:18.596766   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:18.635012   59674 cri.go:89] found id: ""
	I0722 11:55:18.635034   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.635041   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:18.635049   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:18.635060   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:18.685999   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:18.686024   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:18.700085   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:18.700108   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:18.765465   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:18.765484   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:18.765495   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:18.846554   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:18.846592   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:17.179144   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:19.677144   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.857428   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:18.356050   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:19.846343   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:22.345281   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:24.346147   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:21.389684   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:21.401964   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:21.402042   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:21.438128   59674 cri.go:89] found id: ""
	I0722 11:55:21.438156   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.438165   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:21.438171   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:21.438258   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:21.475793   59674 cri.go:89] found id: ""
	I0722 11:55:21.475819   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.475828   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:21.475833   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:21.475878   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:21.510238   59674 cri.go:89] found id: ""
	I0722 11:55:21.510265   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.510273   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:21.510278   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:21.510333   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:21.548293   59674 cri.go:89] found id: ""
	I0722 11:55:21.548320   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.548331   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:21.548337   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:21.548403   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:21.584542   59674 cri.go:89] found id: ""
	I0722 11:55:21.584573   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.584584   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:21.584591   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:21.584655   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:21.621709   59674 cri.go:89] found id: ""
	I0722 11:55:21.621745   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.621758   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:21.621767   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:21.621854   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:21.656111   59674 cri.go:89] found id: ""
	I0722 11:55:21.656134   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.656143   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:21.656148   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:21.656197   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:21.692324   59674 cri.go:89] found id: ""
	I0722 11:55:21.692353   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.692363   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:21.692374   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:21.692405   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:21.769527   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:21.769550   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:21.769566   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:21.850069   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:21.850107   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:21.890781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:21.890816   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:21.952170   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:21.952211   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:24.467001   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:24.481526   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:24.481594   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:24.518694   59674 cri.go:89] found id: ""
	I0722 11:55:24.518724   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.518734   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:24.518740   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:24.518798   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:24.554606   59674 cri.go:89] found id: ""
	I0722 11:55:24.554629   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.554637   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:24.554642   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:24.554703   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:24.592042   59674 cri.go:89] found id: ""
	I0722 11:55:24.592072   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.592083   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:24.592090   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:24.592158   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:24.624456   59674 cri.go:89] found id: ""
	I0722 11:55:24.624479   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.624487   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:24.624493   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:24.624540   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:24.659502   59674 cri.go:89] found id: ""
	I0722 11:55:24.659526   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.659533   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:24.659541   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:24.659586   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:24.695548   59674 cri.go:89] found id: ""
	I0722 11:55:24.695572   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.695580   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:24.695585   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:24.695632   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:24.730320   59674 cri.go:89] found id: ""
	I0722 11:55:24.730362   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.730383   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:24.730391   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:24.730451   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:24.763002   59674 cri.go:89] found id: ""
	I0722 11:55:24.763031   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.763042   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:24.763053   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:24.763068   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:24.801537   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:24.801568   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:24.855157   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:24.855189   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:24.872946   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:24.872983   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:24.943654   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:24.943683   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:24.943697   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:21.677205   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:23.677250   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:20.857023   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:22.857266   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:25.356958   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:24.840700   59477 pod_ready.go:81] duration metric: took 4m0.000727978s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" ...
	E0722 11:55:24.840728   59477 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:55:24.840745   59477 pod_ready.go:38] duration metric: took 4m14.023350526s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:55:24.840771   59477 kubeadm.go:597] duration metric: took 4m21.561007849s to restartPrimaryControlPlane
	W0722 11:55:24.840842   59477 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:55:24.840871   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:55:27.532539   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:27.551073   59674 kubeadm.go:597] duration metric: took 4m3.599954496s to restartPrimaryControlPlane
	W0722 11:55:27.551154   59674 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:55:27.551183   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:55:28.607726   59674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.056515088s)
	I0722 11:55:28.607808   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:55:28.622638   59674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:55:28.633327   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:55:28.643630   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:55:28.643657   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:55:28.643708   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:55:28.655424   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:55:28.655488   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:55:28.666415   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:55:28.678321   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:55:28.678387   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:55:28.687990   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:55:28.700637   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:55:28.700688   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:55:28.711737   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:55:28.723611   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:55:28.723672   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:55:28.734841   59674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:55:28.966498   59674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:55:25.677562   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:27.677626   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:29.678017   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:27.359533   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:29.856428   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:32.177943   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:34.677244   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:32.356225   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:34.357127   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:36.677815   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:39.176631   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:36.857121   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:38.857187   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:41.177346   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:43.179961   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:41.357029   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:43.857548   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:45.676921   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:47.677104   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:50.177979   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:45.858212   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:48.355737   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:50.357352   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:52.179852   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:54.678525   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:52.856789   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:54.857581   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:56.291211   59477 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.450312515s)
	I0722 11:55:56.291284   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:55:56.307108   59477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:55:56.316823   59477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:55:56.325987   59477 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:55:56.326008   59477 kubeadm.go:157] found existing configuration files:
	
	I0722 11:55:56.326040   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:55:56.334979   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:55:56.335022   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:55:56.344230   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:55:56.352903   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:55:56.352952   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:55:56.362589   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:55:56.371907   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:55:56.371960   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:55:56.381248   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:55:56.389979   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:55:56.390029   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:55:56.399143   59477 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:55:56.451195   59477 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 11:55:56.451336   59477 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:55:56.583288   59477 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:55:56.583416   59477 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:55:56.583545   59477 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:55:56.812941   59477 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:55:56.814801   59477 out.go:204]   - Generating certificates and keys ...
	I0722 11:55:56.814907   59477 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:55:56.815004   59477 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:55:56.815107   59477 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:55:56.815158   59477 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:55:56.815245   59477 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:55:56.815328   59477 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:55:56.815398   59477 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:55:56.815472   59477 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:55:56.815551   59477 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:55:56.815665   59477 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:55:56.815720   59477 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:55:56.815792   59477 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:55:56.905480   59477 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:55:57.235259   59477 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:55:57.382716   59477 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:55:57.782474   59477 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:55:57.975512   59477 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:55:57.975939   59477 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:55:57.978251   59477 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:55:57.980183   59477 out.go:204]   - Booting up control plane ...
	I0722 11:55:57.980296   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:55:57.980407   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:55:57.980501   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:55:57.997417   59477 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:55:57.998246   59477 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:55:57.998298   59477 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:55:58.125569   59477 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:55:58.125669   59477 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:55:59.127130   59477 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00142245s
	I0722 11:55:59.127288   59477 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:55:56.679572   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:59.177683   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:56.858200   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:59.356467   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:04.131970   59477 kubeadm.go:310] [api-check] The API server is healthy after 5.00210234s
	I0722 11:56:04.145149   59477 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:56:04.162087   59477 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:56:04.189220   59477 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:56:04.189501   59477 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-802149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:56:04.201016   59477 kubeadm.go:310] [bootstrap-token] Using token: kquhfx.1qbb4r033babuox0
	I0722 11:56:04.202331   59477 out.go:204]   - Configuring RBAC rules ...
	I0722 11:56:04.202440   59477 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:56:04.207324   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:56:04.217174   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:56:04.221591   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:56:04.225670   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:56:04.229980   59477 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:56:04.540237   59477 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:56:01.677898   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:03.678604   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:05.015791   59477 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:56:05.538526   59477 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:56:05.539474   59477 kubeadm.go:310] 
	I0722 11:56:05.539573   59477 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:56:05.539585   59477 kubeadm.go:310] 
	I0722 11:56:05.539684   59477 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:56:05.539701   59477 kubeadm.go:310] 
	I0722 11:56:05.539735   59477 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:56:05.539818   59477 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:56:05.539894   59477 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:56:05.539903   59477 kubeadm.go:310] 
	I0722 11:56:05.540003   59477 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:56:05.540026   59477 kubeadm.go:310] 
	I0722 11:56:05.540102   59477 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:56:05.540111   59477 kubeadm.go:310] 
	I0722 11:56:05.540178   59477 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:56:05.540280   59477 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:56:05.540390   59477 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:56:05.540399   59477 kubeadm.go:310] 
	I0722 11:56:05.540496   59477 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:56:05.540612   59477 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:56:05.540621   59477 kubeadm.go:310] 
	I0722 11:56:05.540765   59477 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kquhfx.1qbb4r033babuox0 \
	I0722 11:56:05.540917   59477 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:56:05.540954   59477 kubeadm.go:310] 	--control-plane 
	I0722 11:56:05.540963   59477 kubeadm.go:310] 
	I0722 11:56:05.541073   59477 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:56:05.541082   59477 kubeadm.go:310] 
	I0722 11:56:05.541188   59477 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kquhfx.1qbb4r033babuox0 \
	I0722 11:56:05.541330   59477 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:56:05.541765   59477 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:56:05.541892   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:56:05.541910   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:56:05.543345   59477 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:56:01.357811   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:03.359464   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:04.851108   60225 pod_ready.go:81] duration metric: took 4m0.000807254s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" ...
	E0722 11:56:04.851137   60225 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:56:04.851154   60225 pod_ready.go:38] duration metric: took 4m12.048821409s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:04.851185   60225 kubeadm.go:597] duration metric: took 4m21.969513024s to restartPrimaryControlPlane
	W0722 11:56:04.851256   60225 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:56:04.851288   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:56:05.544535   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:56:05.556946   59477 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:56:05.578633   59477 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:56:05.578705   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:05.578715   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-802149 minikube.k8s.io/updated_at=2024_07_22T11_56_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=embed-certs-802149 minikube.k8s.io/primary=true
	I0722 11:56:05.614944   59477 ops.go:34] apiserver oom_adj: -16
	I0722 11:56:05.773354   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:06.273578   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:06.773980   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:07.274302   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:07.774175   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:08.274316   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:08.774096   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:09.273401   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:05.678724   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:08.178575   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:09.774010   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.274337   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.773845   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:11.273387   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:11.773610   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:12.273579   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:12.774429   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:13.273474   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:13.774397   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:14.273900   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.677662   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:12.679646   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:15.177660   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:14.774140   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:15.273579   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:15.773981   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:16.273668   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:16.773814   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:17.274094   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:17.773477   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:18.273407   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:18.774424   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:19.274215   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:19.371507   59477 kubeadm.go:1113] duration metric: took 13.792861511s to wait for elevateKubeSystemPrivileges
	I0722 11:56:19.371549   59477 kubeadm.go:394] duration metric: took 5m16.138448524s to StartCluster
	I0722 11:56:19.371572   59477 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:19.371660   59477 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:56:19.373430   59477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:19.373759   59477 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:56:19.373841   59477 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:56:19.373922   59477 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-802149"
	I0722 11:56:19.373932   59477 addons.go:69] Setting default-storageclass=true in profile "embed-certs-802149"
	I0722 11:56:19.373962   59477 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-802149"
	I0722 11:56:19.373963   59477 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-802149"
	W0722 11:56:19.373971   59477 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:56:19.373974   59477 addons.go:69] Setting metrics-server=true in profile "embed-certs-802149"
	I0722 11:56:19.373998   59477 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:56:19.374004   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.374013   59477 addons.go:234] Setting addon metrics-server=true in "embed-certs-802149"
	W0722 11:56:19.374021   59477 addons.go:243] addon metrics-server should already be in state true
	I0722 11:56:19.374044   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.374353   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374376   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374383   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.374390   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374401   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.374418   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.375347   59477 out.go:177] * Verifying Kubernetes components...
	I0722 11:56:19.376850   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:56:19.393500   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0722 11:56:19.394178   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.394524   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36485
	I0722 11:56:19.394704   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0722 11:56:19.394894   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.395064   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395087   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395137   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.395433   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395451   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395471   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.395586   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395607   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395653   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.395754   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.395956   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.396317   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.396345   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.396481   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.396512   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.399476   59477 addons.go:234] Setting addon default-storageclass=true in "embed-certs-802149"
	W0722 11:56:19.399502   59477 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:56:19.399530   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.399879   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.399908   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.411862   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44855
	I0722 11:56:19.412247   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.412708   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.412733   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.413106   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.413324   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.414100   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0722 11:56:19.414530   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.415017   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.415041   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.415100   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.415300   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0722 11:56:19.415340   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.415574   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.415662   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.416068   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.416095   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.416416   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.416861   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.416905   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.417086   59477 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:56:19.417365   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.418373   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:56:19.418392   59477 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:56:19.418411   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.419202   59477 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:56:19.420581   59477 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:19.420595   59477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:56:19.420608   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.421600   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.422201   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.422224   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.422367   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.422535   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.422697   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.422820   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.423577   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.424183   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.424211   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.424347   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.424543   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.424694   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.424812   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.432998   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0722 11:56:19.433395   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.433820   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.433837   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.434137   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.434300   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.435804   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.436013   59477 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:19.436029   59477 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:56:19.436043   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.439161   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.439507   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.439527   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.439666   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.439832   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.439968   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.440111   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.579586   59477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:56:19.613199   59477 node_ready.go:35] waiting up to 6m0s for node "embed-certs-802149" to be "Ready" ...
	I0722 11:56:19.621008   59477 node_ready.go:49] node "embed-certs-802149" has status "Ready":"True"
	I0722 11:56:19.621026   59477 node_ready.go:38] duration metric: took 7.803634ms for node "embed-certs-802149" to be "Ready" ...
	I0722 11:56:19.621035   59477 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:19.626247   59477 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:17.676844   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:19.677982   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:19.721316   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:19.751087   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:19.752762   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:56:19.752782   59477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:56:19.855879   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:56:19.855913   59477 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:56:19.929321   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:19.929353   59477 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:56:19.985335   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:20.449104   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449132   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449106   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449220   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449514   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449514   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.449531   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449540   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.449553   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449566   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449879   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449880   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.449902   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.450851   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.450865   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.450872   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.450877   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.451078   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.451104   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.451119   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.470273   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.470292   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.470576   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.470623   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.470597   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.627931   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.627953   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.628276   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.628294   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.628293   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.628308   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.628317   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.628560   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.628605   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.628619   59477 addons.go:475] Verifying addon metrics-server=true in "embed-certs-802149"
	I0722 11:56:20.628625   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.630168   59477 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:56:20.631410   59477 addons.go:510] duration metric: took 1.257573445s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 11:56:21.631628   59477 pod_ready.go:102] pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:22.159823   59477 pod_ready.go:92] pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.159847   59477 pod_ready.go:81] duration metric: took 2.533579062s for pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.159856   59477 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.180462   59477 pod_ready.go:92] pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.180487   59477 pod_ready.go:81] duration metric: took 20.623565ms for pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.180499   59477 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.194180   59477 pod_ready.go:92] pod "etcd-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.194207   59477 pod_ready.go:81] duration metric: took 13.700217ms for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.194219   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.199321   59477 pod_ready.go:92] pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.199343   59477 pod_ready.go:81] duration metric: took 5.116564ms for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.199355   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.203845   59477 pod_ready.go:92] pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.203865   59477 pod_ready.go:81] duration metric: took 4.502825ms for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.203875   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w89tg" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.529773   59477 pod_ready.go:92] pod "kube-proxy-w89tg" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.529797   59477 pod_ready.go:81] duration metric: took 325.914184ms for pod "kube-proxy-w89tg" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.529809   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.930597   59477 pod_ready.go:92] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.930620   59477 pod_ready.go:81] duration metric: took 400.802915ms for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.930631   59477 pod_ready.go:38] duration metric: took 3.309586025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:22.930649   59477 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:56:22.930707   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:56:22.946660   59477 api_server.go:72] duration metric: took 3.57286966s to wait for apiserver process to appear ...
	I0722 11:56:22.946684   59477 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:56:22.946703   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:56:22.950940   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I0722 11:56:22.951817   59477 api_server.go:141] control plane version: v1.30.3
	I0722 11:56:22.951840   59477 api_server.go:131] duration metric: took 5.148295ms to wait for apiserver health ...
	I0722 11:56:22.951848   59477 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:56:23.134122   59477 system_pods.go:59] 9 kube-system pods found
	I0722 11:56:23.134153   59477 system_pods.go:61] "coredns-7db6d8ff4d-c2dkr" [c82a689e-5a99-4889-808f-3e1e199323d8] Running
	I0722 11:56:23.134159   59477 system_pods.go:61] "coredns-7db6d8ff4d-kz8d9" [26d2d65c-aa13-4d94-b091-bf674fee0185] Running
	I0722 11:56:23.134163   59477 system_pods.go:61] "etcd-embed-certs-802149" [a30aead7-f9cf-487f-9d2e-dac877edf07a] Running
	I0722 11:56:23.134166   59477 system_pods.go:61] "kube-apiserver-embed-certs-802149" [b7f50315-5043-4abd-a40e-c2d285b66faa] Running
	I0722 11:56:23.134169   59477 system_pods.go:61] "kube-controller-manager-embed-certs-802149" [e96d4b9d-0bf0-40b4-b483-ba558587fe91] Running
	I0722 11:56:23.134172   59477 system_pods.go:61] "kube-proxy-w89tg" [da4d3074-e552-4c7b-ba0f-f57a3b80f529] Running
	I0722 11:56:23.134175   59477 system_pods.go:61] "kube-scheduler-embed-certs-802149" [5f1d8d32-7564-4d2a-ba97-283754771b15] Running
	I0722 11:56:23.134181   59477 system_pods.go:61] "metrics-server-569cc877fc-88d4n" [b705d674-b431-4946-aa67-871d7d2f9e08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:56:23.134186   59477 system_pods.go:61] "storage-provisioner" [a68fcb5f-42b5-408e-9c10-d86b14a1b993] Running
	I0722 11:56:23.134195   59477 system_pods.go:74] duration metric: took 182.340929ms to wait for pod list to return data ...
	I0722 11:56:23.134204   59477 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:56:23.330549   59477 default_sa.go:45] found service account: "default"
	I0722 11:56:23.330573   59477 default_sa.go:55] duration metric: took 196.363183ms for default service account to be created ...
	I0722 11:56:23.330582   59477 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:56:23.532750   59477 system_pods.go:86] 9 kube-system pods found
	I0722 11:56:23.532774   59477 system_pods.go:89] "coredns-7db6d8ff4d-c2dkr" [c82a689e-5a99-4889-808f-3e1e199323d8] Running
	I0722 11:56:23.532779   59477 system_pods.go:89] "coredns-7db6d8ff4d-kz8d9" [26d2d65c-aa13-4d94-b091-bf674fee0185] Running
	I0722 11:56:23.532784   59477 system_pods.go:89] "etcd-embed-certs-802149" [a30aead7-f9cf-487f-9d2e-dac877edf07a] Running
	I0722 11:56:23.532788   59477 system_pods.go:89] "kube-apiserver-embed-certs-802149" [b7f50315-5043-4abd-a40e-c2d285b66faa] Running
	I0722 11:56:23.532795   59477 system_pods.go:89] "kube-controller-manager-embed-certs-802149" [e96d4b9d-0bf0-40b4-b483-ba558587fe91] Running
	I0722 11:56:23.532799   59477 system_pods.go:89] "kube-proxy-w89tg" [da4d3074-e552-4c7b-ba0f-f57a3b80f529] Running
	I0722 11:56:23.532802   59477 system_pods.go:89] "kube-scheduler-embed-certs-802149" [5f1d8d32-7564-4d2a-ba97-283754771b15] Running
	I0722 11:56:23.532809   59477 system_pods.go:89] "metrics-server-569cc877fc-88d4n" [b705d674-b431-4946-aa67-871d7d2f9e08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:56:23.532813   59477 system_pods.go:89] "storage-provisioner" [a68fcb5f-42b5-408e-9c10-d86b14a1b993] Running
	I0722 11:56:23.532821   59477 system_pods.go:126] duration metric: took 202.234836ms to wait for k8s-apps to be running ...
	I0722 11:56:23.532832   59477 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:56:23.532876   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:56:23.547953   59477 system_svc.go:56] duration metric: took 15.113032ms WaitForService to wait for kubelet
	I0722 11:56:23.547983   59477 kubeadm.go:582] duration metric: took 4.174196727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:56:23.548007   59477 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:56:23.730474   59477 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:56:23.730495   59477 node_conditions.go:123] node cpu capacity is 2
	I0722 11:56:23.730505   59477 node_conditions.go:105] duration metric: took 182.492899ms to run NodePressure ...
	I0722 11:56:23.730516   59477 start.go:241] waiting for startup goroutines ...
	I0722 11:56:23.730522   59477 start.go:246] waiting for cluster config update ...
	I0722 11:56:23.730532   59477 start.go:255] writing updated cluster config ...
	I0722 11:56:23.730772   59477 ssh_runner.go:195] Run: rm -f paused
	I0722 11:56:23.780571   59477 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 11:56:23.782536   59477 out.go:177] * Done! kubectl is now configured to use "embed-certs-802149" cluster and "default" namespace by default
	I0722 11:56:22.178416   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:24.676529   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:26.677122   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:29.177390   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:31.677291   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:33.677523   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:35.170828   58921 pod_ready.go:81] duration metric: took 4m0.000275806s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" ...
	E0722 11:56:35.170855   58921 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:56:35.170871   58921 pod_ready.go:38] duration metric: took 4m13.545311637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:35.170901   58921 kubeadm.go:597] duration metric: took 4m20.764141089s to restartPrimaryControlPlane
	W0722 11:56:35.170949   58921 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:56:35.170973   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:56:36.176806   60225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.325500952s)
	I0722 11:56:36.176871   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:56:36.193398   60225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:56:36.203561   60225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:56:36.213561   60225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:56:36.213584   60225 kubeadm.go:157] found existing configuration files:
	
	I0722 11:56:36.213654   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 11:56:36.223204   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:56:36.223265   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:56:36.232550   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 11:56:36.241899   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:56:36.241961   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:56:36.252184   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 11:56:36.262462   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:56:36.262518   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:56:36.272942   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 11:56:36.282776   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:56:36.282831   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:56:36.292375   60225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:56:36.490647   60225 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:56:44.713923   60225 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 11:56:44.713975   60225 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:56:44.714046   60225 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:56:44.714145   60225 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:56:44.714255   60225 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:56:44.714330   60225 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:56:44.715906   60225 out.go:204]   - Generating certificates and keys ...
	I0722 11:56:44.716026   60225 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:56:44.716122   60225 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:56:44.716247   60225 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:56:44.716346   60225 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:56:44.716449   60225 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:56:44.716530   60225 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:56:44.716617   60225 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:56:44.716704   60225 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:56:44.716820   60225 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:56:44.716939   60225 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:56:44.717000   60225 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:56:44.717078   60225 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:56:44.717159   60225 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:56:44.717238   60225 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:56:44.717312   60225 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:56:44.717397   60225 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:56:44.717471   60225 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:56:44.717594   60225 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:56:44.717684   60225 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:56:44.719097   60225 out.go:204]   - Booting up control plane ...
	I0722 11:56:44.719201   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:56:44.719288   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:56:44.719387   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:56:44.719548   60225 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:56:44.719662   60225 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:56:44.719698   60225 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:56:44.719819   60225 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:56:44.719909   60225 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:56:44.719969   60225 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001605769s
	I0722 11:56:44.720047   60225 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:56:44.720114   60225 kubeadm.go:310] [api-check] The API server is healthy after 4.501377908s
	I0722 11:56:44.720253   60225 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:56:44.720428   60225 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:56:44.720522   60225 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:56:44.720781   60225 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-605740 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:56:44.720842   60225 kubeadm.go:310] [bootstrap-token] Using token: 51n0hg.x5nghdd43rf7nm3m
	I0722 11:56:44.722095   60225 out.go:204]   - Configuring RBAC rules ...
	I0722 11:56:44.722193   60225 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:56:44.722266   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:56:44.722405   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:56:44.722575   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:56:44.722695   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:56:44.722769   60225 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:56:44.722875   60225 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:56:44.722916   60225 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:56:44.722957   60225 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:56:44.722966   60225 kubeadm.go:310] 
	I0722 11:56:44.723046   60225 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:56:44.723055   60225 kubeadm.go:310] 
	I0722 11:56:44.723117   60225 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:56:44.723123   60225 kubeadm.go:310] 
	I0722 11:56:44.723147   60225 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:56:44.723201   60225 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:56:44.723244   60225 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:56:44.723250   60225 kubeadm.go:310] 
	I0722 11:56:44.723313   60225 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:56:44.723324   60225 kubeadm.go:310] 
	I0722 11:56:44.723374   60225 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:56:44.723387   60225 kubeadm.go:310] 
	I0722 11:56:44.723462   60225 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:56:44.723568   60225 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:56:44.723624   60225 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:56:44.723630   60225 kubeadm.go:310] 
	I0722 11:56:44.723703   60225 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:56:44.723762   60225 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:56:44.723768   60225 kubeadm.go:310] 
	I0722 11:56:44.723832   60225 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 51n0hg.x5nghdd43rf7nm3m \
	I0722 11:56:44.723935   60225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:56:44.723960   60225 kubeadm.go:310] 	--control-plane 
	I0722 11:56:44.723966   60225 kubeadm.go:310] 
	I0722 11:56:44.724035   60225 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:56:44.724041   60225 kubeadm.go:310] 
	I0722 11:56:44.724109   60225 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 51n0hg.x5nghdd43rf7nm3m \
	I0722 11:56:44.724210   60225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:56:44.724222   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:56:44.724231   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:56:44.725651   60225 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:56:44.726843   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:56:44.737856   60225 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:56:44.756687   60225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:56:44.756772   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:44.756790   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-605740 minikube.k8s.io/updated_at=2024_07_22T11_56_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=default-k8s-diff-port-605740 minikube.k8s.io/primary=true
	I0722 11:56:44.782416   60225 ops.go:34] apiserver oom_adj: -16
	I0722 11:56:44.957801   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:45.458616   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:45.958542   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:46.458436   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:46.957908   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:47.458058   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:47.958520   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:48.457970   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:48.958357   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:49.457964   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:49.958236   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:50.458547   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:50.958594   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:51.457865   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:51.958297   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:52.458486   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:52.957877   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:53.458199   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:53.958331   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:54.458178   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:54.958725   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:55.458619   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:55.958861   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:56.458294   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:56.958145   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:57.458414   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:57.566568   60225 kubeadm.go:1113] duration metric: took 12.809852518s to wait for elevateKubeSystemPrivileges
	I0722 11:56:57.566604   60225 kubeadm.go:394] duration metric: took 5m14.748062926s to StartCluster
	I0722 11:56:57.566626   60225 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:57.566709   60225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:56:57.568307   60225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:57.568592   60225 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:56:57.568648   60225 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:56:57.568731   60225 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568765   60225 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.568778   60225 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:56:57.568777   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:56:57.568765   60225 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568775   60225 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568811   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.568813   60225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-605740"
	I0722 11:56:57.568819   60225 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.568828   60225 addons.go:243] addon metrics-server should already be in state true
	I0722 11:56:57.568849   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.569145   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569170   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569187   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.569191   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.569216   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569265   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.570171   60225 out.go:177] * Verifying Kubernetes components...
	I0722 11:56:57.571536   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:56:57.585174   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44223
	I0722 11:56:57.585655   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.586149   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.586174   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.586532   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.587082   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.587135   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.588871   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43863
	I0722 11:56:57.588968   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0722 11:56:57.589289   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.589398   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.589785   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.589809   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.589875   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.589898   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.590183   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.590223   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.590393   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.590860   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.590906   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.594024   60225 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.594046   60225 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:56:57.594074   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.594755   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.594794   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.604913   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0722 11:56:57.605449   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.606000   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.606017   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.606459   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37393
	I0722 11:56:57.606768   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.606871   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.607129   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.607259   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.607273   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.607591   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.607779   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.609472   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.609513   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46833
	I0722 11:56:57.609611   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.609857   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.610299   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.610314   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.610552   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.611030   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.611066   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.611075   60225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:56:57.611086   60225 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:56:57.612333   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:56:57.612352   60225 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:56:57.612373   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.612449   60225 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:57.612463   60225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:56:57.612480   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.615359   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.615950   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.615979   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.616137   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.616288   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.616341   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.616503   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.616636   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.616806   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.616830   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.617016   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.617204   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.617433   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.617587   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.627323   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0722 11:56:57.627674   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.628110   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.628129   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.628426   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.628581   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.630063   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.630250   60225 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:57.630264   60225 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:56:57.630276   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.633223   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.633589   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.633652   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.633864   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.634041   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.634208   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.634349   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.800318   60225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:56:57.838800   60225 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-605740" to be "Ready" ...
	I0722 11:56:57.858375   60225 node_ready.go:49] node "default-k8s-diff-port-605740" has status "Ready":"True"
	I0722 11:56:57.858401   60225 node_ready.go:38] duration metric: took 19.564389ms for node "default-k8s-diff-port-605740" to be "Ready" ...
	I0722 11:56:57.858412   60225 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:57.864271   60225 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.891296   60225 pod_ready.go:92] pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.891327   60225 pod_ready.go:81] duration metric: took 27.02499ms for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.891341   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.904548   60225 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.904572   60225 pod_ready.go:81] duration metric: took 13.223985ms for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.904582   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.922071   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:56:57.922090   60225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:56:57.936115   60225 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.936135   60225 pod_ready.go:81] duration metric: took 31.547556ms for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.936144   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-58qcp" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.956826   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:57.959831   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:57.970183   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:56:57.970209   60225 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:56:58.023756   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:58.023783   60225 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:56:58.132167   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
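The sequence above is minikube's addon install path: each manifest is copied (scp) into /etc/kubernetes/addons on the VM, then applied in one shot with the cluster's own kubectl binary under the in-VM kubeconfig. The Go sketch below mirrors that pattern with a plain ssh invocation; the ssh target, kubectl path and manifest list are taken from the log lines above, but the helper itself is illustrative and is not minikube's ssh_runner.

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    // applyAddonManifests mirrors the pattern in the log: run the cluster's own
    // kubectl over SSH against manifests already staged under /etc/kubernetes/addons.
    func applyAddonManifests(target, kubectlPath string, manifests []string) error {
    	args := []string{target, "sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig", kubectlPath, "apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	out, err := exec.Command("ssh", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	// Target and kubectl path copied from the log; adjust for another profile.
    	if err := applyAddonManifests("docker@192.168.39.87",
    		"/var/lib/minikube/binaries/v1.30.3/kubectl", manifests); err != nil {
    		log.Fatal(err)
    	}
    }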
	I0722 11:56:58.836074   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836101   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836129   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836151   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836444   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.836480   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836489   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.836496   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836507   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836603   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.836635   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836645   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.836653   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836660   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836797   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836809   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.838425   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.838441   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.838457   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.855236   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.855255   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.855533   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.855551   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.855558   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:59.133028   60225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.000816157s)
	I0722 11:56:59.133092   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:59.133108   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:59.133395   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:59.133412   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:59.133420   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:59.133428   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:59.133715   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:59.133744   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:59.133766   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:59.133788   60225 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-605740"
	I0722 11:56:59.135326   60225 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:56:59.136408   60225 addons.go:510] duration metric: took 1.567760763s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 11:56:59.942782   60225 pod_ready.go:102] pod "kube-proxy-58qcp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:00.442434   60225 pod_ready.go:92] pod "kube-proxy-58qcp" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:00.442455   60225 pod_ready.go:81] duration metric: took 2.50630376s for pod "kube-proxy-58qcp" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.442463   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.446225   60225 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:00.446246   60225 pod_ready.go:81] duration metric: took 3.778284ms for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.446254   60225 pod_ready.go:38] duration metric: took 2.58782997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
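The pod_ready lines above poll each system-critical pod until its Ready condition reports True. A minimal client-go sketch of that check, assuming the in-VM kubeconfig path and using a pod name lifted from the log as a placeholder:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's Ready condition is True, the same
    // signal the pod_ready waits above are looking for.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-58qcp", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("Ready:", podIsReady(pod))
    }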
	I0722 11:57:00.446267   60225 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:57:00.446310   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:57:00.461412   60225 api_server.go:72] duration metric: took 2.892790415s to wait for apiserver process to appear ...
	I0722 11:57:00.461431   60225 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:57:00.461448   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:57:00.465904   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 200:
	ok
	I0722 11:57:00.466558   60225 api_server.go:141] control plane version: v1.30.3
	I0722 11:57:00.466577   60225 api_server.go:131] duration metric: took 5.13931ms to wait for apiserver health ...
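The healthz probe above is a plain HTTPS GET against the apiserver (port 8444 for this default-k8s-diff-port profile) that expects a 200 response with body "ok". A self-contained sketch of that probe, assuming it is acceptable to skip certificate verification for illustration (a real client would trust the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	// Skipping verification keeps the sketch self-contained; the address is
    	// taken from the log lines above.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.39.87:8444/healthz")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
    }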
	I0722 11:57:00.466585   60225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:57:00.471230   60225 system_pods.go:59] 9 kube-system pods found
	I0722 11:57:00.471254   60225 system_pods.go:61] "coredns-7db6d8ff4d-nlfgl" [c02dda63-e71b-429f-b9d5-0b2ca40e8dcc] Running
	I0722 11:57:00.471260   60225 system_pods.go:61] "coredns-7db6d8ff4d-tnnxf" [337c6df7-035c-488d-a123-a410d76d836b] Running
	I0722 11:57:00.471265   60225 system_pods.go:61] "etcd-default-k8s-diff-port-605740" [d1cda641-22de-4bda-ac2b-ed0a92bbde9f] Running
	I0722 11:57:00.471270   60225 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-605740" [b3c4fc30-392e-40db-8be3-fea337a40ca5] Running
	I0722 11:57:00.471274   60225 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-605740" [7cddd0d3-487d-47f5-9c6d-873c5731170e] Running
	I0722 11:57:00.471279   60225 system_pods.go:61] "kube-proxy-58qcp" [25c02c70-a840-410c-9d48-3d15a3927a77] Running
	I0722 11:57:00.471283   60225 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-605740" [b7678aec-927b-40c2-a91e-4c0444c59a90] Running
	I0722 11:57:00.471293   60225 system_pods.go:61] "metrics-server-569cc877fc-2xv7x" [7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:00.471299   60225 system_pods.go:61] "storage-provisioner" [c4ff4a3e-008c-4c4e-9eb3-281c46b10279] Running
	I0722 11:57:00.471309   60225 system_pods.go:74] duration metric: took 4.717009ms to wait for pod list to return data ...
	I0722 11:57:00.471320   60225 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:57:00.642325   60225 default_sa.go:45] found service account: "default"
	I0722 11:57:00.642356   60225 default_sa.go:55] duration metric: took 171.03007ms for default service account to be created ...
	I0722 11:57:00.642365   60225 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:57:00.846043   60225 system_pods.go:86] 9 kube-system pods found
	I0722 11:57:00.846071   60225 system_pods.go:89] "coredns-7db6d8ff4d-nlfgl" [c02dda63-e71b-429f-b9d5-0b2ca40e8dcc] Running
	I0722 11:57:00.846079   60225 system_pods.go:89] "coredns-7db6d8ff4d-tnnxf" [337c6df7-035c-488d-a123-a410d76d836b] Running
	I0722 11:57:00.846083   60225 system_pods.go:89] "etcd-default-k8s-diff-port-605740" [d1cda641-22de-4bda-ac2b-ed0a92bbde9f] Running
	I0722 11:57:00.846087   60225 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-605740" [b3c4fc30-392e-40db-8be3-fea337a40ca5] Running
	I0722 11:57:00.846092   60225 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-605740" [7cddd0d3-487d-47f5-9c6d-873c5731170e] Running
	I0722 11:57:00.846096   60225 system_pods.go:89] "kube-proxy-58qcp" [25c02c70-a840-410c-9d48-3d15a3927a77] Running
	I0722 11:57:00.846100   60225 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-605740" [b7678aec-927b-40c2-a91e-4c0444c59a90] Running
	I0722 11:57:00.846106   60225 system_pods.go:89] "metrics-server-569cc877fc-2xv7x" [7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:00.846110   60225 system_pods.go:89] "storage-provisioner" [c4ff4a3e-008c-4c4e-9eb3-281c46b10279] Running
	I0722 11:57:00.846118   60225 system_pods.go:126] duration metric: took 203.748606ms to wait for k8s-apps to be running ...
	I0722 11:57:00.846124   60225 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:57:00.846168   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:00.867261   60225 system_svc.go:56] duration metric: took 21.130025ms WaitForService to wait for kubelet
	I0722 11:57:00.867290   60225 kubeadm.go:582] duration metric: took 3.298668854s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:57:00.867314   60225 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:57:01.042201   60225 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:57:01.042226   60225 node_conditions.go:123] node cpu capacity is 2
	I0722 11:57:01.042237   60225 node_conditions.go:105] duration metric: took 174.91764ms to run NodePressure ...
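The NodePressure verification reads the node's reported capacity (2 CPUs and 17734596Ki of ephemeral storage here) from its status. A client-go sketch of the same read; the kubeconfig path is a placeholder, and a fuller check would also confirm the MemoryPressure/DiskPressure/PIDPressure conditions are False:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		// These are the figures the node_conditions step logs above.
    		fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    }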
	I0722 11:57:01.042249   60225 start.go:241] waiting for startup goroutines ...
	I0722 11:57:01.042256   60225 start.go:246] waiting for cluster config update ...
	I0722 11:57:01.042268   60225 start.go:255] writing updated cluster config ...
	I0722 11:57:01.042526   60225 ssh_runner.go:195] Run: rm -f paused
	I0722 11:57:01.090643   60225 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 11:57:01.092526   60225 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-605740" cluster and "default" namespace by default
	I0722 11:57:01.339755   58921 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.168752701s)
	I0722 11:57:01.339827   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:01.368833   58921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:57:01.392011   58921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:57:01.403725   58921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:57:01.403746   58921 kubeadm.go:157] found existing configuration files:
	
	I0722 11:57:01.403795   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:57:01.421922   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:57:01.422011   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:57:01.434303   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:57:01.445095   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:57:01.445154   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:57:01.464906   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:57:01.475002   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:57:01.475074   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:57:01.484493   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:57:01.493467   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:57:01.493523   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
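The block above is the stale-config cleanup run before kubeadm init: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when the endpoint is absent. Here the files were already wiped by the preceding kubeadm reset, so every grep exits with status 2 and the rm calls are no-ops. A hedged Go sketch of the same per-file decision, run locally rather than over SSH:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // removeIfStale deletes a kubeconfig that does not reference the expected
    // control-plane endpoint; a missing file is simply skipped, mirroring the
    // grep-then-rm sequence in the log.
    func removeIfStale(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err == nil && strings.Contains(string(data), endpoint) {
    		return nil // still points at the right control plane, keep it
    	}
    	if err := os.Remove(path); err != nil {
    		if os.IsNotExist(err) {
    			return nil
    		}
    		return err
    	}
    	fmt.Println("removed stale config:", path)
    	return nil
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := removeIfStale(f, endpoint); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    		}
    	}
    }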
	I0722 11:57:01.502496   58921 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:57:01.550079   58921 kubeadm.go:310] W0722 11:57:01.524041    2933 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 11:57:01.551819   58921 kubeadm.go:310] W0722 11:57:01.525728    2933 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 11:57:01.670102   58921 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:57:10.497048   58921 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0722 11:57:10.497168   58921 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:57:10.497273   58921 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:57:10.497381   58921 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:57:10.497498   58921 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 11:57:10.497555   58921 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:57:10.498805   58921 out.go:204]   - Generating certificates and keys ...
	I0722 11:57:10.498905   58921 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:57:10.498982   58921 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:57:10.499087   58921 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:57:10.499182   58921 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:57:10.499265   58921 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:57:10.499326   58921 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:57:10.499385   58921 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:57:10.499500   58921 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:57:10.499633   58921 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:57:10.499724   58921 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:57:10.499784   58921 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:57:10.499840   58921 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:57:10.499892   58921 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:57:10.499982   58921 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:57:10.500064   58921 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:57:10.500155   58921 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:57:10.500241   58921 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:57:10.500343   58921 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:57:10.500442   58921 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:57:10.501847   58921 out.go:204]   - Booting up control plane ...
	I0722 11:57:10.501931   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:57:10.501995   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:57:10.502068   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:57:10.502203   58921 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:57:10.502318   58921 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:57:10.502367   58921 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:57:10.502477   58921 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:57:10.502541   58921 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:57:10.502599   58921 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501448538s
	I0722 11:57:10.502660   58921 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:57:10.502712   58921 kubeadm.go:310] [api-check] The API server is healthy after 5.001578291s
	I0722 11:57:10.502801   58921 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:57:10.502914   58921 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:57:10.502962   58921 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:57:10.503159   58921 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-339929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:57:10.503211   58921 kubeadm.go:310] [bootstrap-token] Using token: ivof4z.0tnj9kdw05524oxn
	I0722 11:57:10.504409   58921 out.go:204]   - Configuring RBAC rules ...
	I0722 11:57:10.504501   58921 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:57:10.504616   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:57:10.504780   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:57:10.504970   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:57:10.505144   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:57:10.505257   58921 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:57:10.505410   58921 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:57:10.505471   58921 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:57:10.505538   58921 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:57:10.505546   58921 kubeadm.go:310] 
	I0722 11:57:10.505631   58921 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:57:10.505649   58921 kubeadm.go:310] 
	I0722 11:57:10.505755   58921 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:57:10.505764   58921 kubeadm.go:310] 
	I0722 11:57:10.505804   58921 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:57:10.505897   58921 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:57:10.505972   58921 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:57:10.505982   58921 kubeadm.go:310] 
	I0722 11:57:10.506059   58921 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:57:10.506067   58921 kubeadm.go:310] 
	I0722 11:57:10.506128   58921 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:57:10.506136   58921 kubeadm.go:310] 
	I0722 11:57:10.506205   58921 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:57:10.506306   58921 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:57:10.506414   58921 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:57:10.506423   58921 kubeadm.go:310] 
	I0722 11:57:10.506520   58921 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:57:10.506617   58921 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:57:10.506626   58921 kubeadm.go:310] 
	I0722 11:57:10.506742   58921 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ivof4z.0tnj9kdw05524oxn \
	I0722 11:57:10.506885   58921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:57:10.506922   58921 kubeadm.go:310] 	--control-plane 
	I0722 11:57:10.506931   58921 kubeadm.go:310] 
	I0722 11:57:10.507044   58921 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:57:10.507057   58921 kubeadm.go:310] 
	I0722 11:57:10.507156   58921 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ivof4z.0tnj9kdw05524oxn \
	I0722 11:57:10.507309   58921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:57:10.507321   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:57:10.507330   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:57:10.508685   58921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:57:10.509747   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:57:10.520250   58921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
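The bridge CNI step above writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist; the log does not show its content. The sketch below writes a minimal bridge-plus-portmap conflist of the general shape CRI-O accepts; the field values (subnet, plugin options) are assumptions for illustration, not minikube's actual template:

    package main

    import (
    	"log"
    	"os"
    )

    // Illustrative conflist only; minikube's real template may differ in
    // version, subnet and plugin options.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }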
	I0722 11:57:10.540094   58921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:57:10.540196   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:10.540212   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-339929 minikube.k8s.io/updated_at=2024_07_22T11_57_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=no-preload-339929 minikube.k8s.io/primary=true
	I0722 11:57:10.763453   58921 ops.go:34] apiserver oom_adj: -16
	I0722 11:57:10.763505   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:11.264268   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:11.764311   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:12.264344   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:12.764563   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:13.264149   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:13.764260   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:14.263595   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:14.763794   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:15.263787   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:15.343777   58921 kubeadm.go:1113] duration metric: took 4.803631766s to wait for elevateKubeSystemPrivileges
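The half-second cadence of "kubectl get sa default" calls above is the elevateKubeSystemPrivileges wait: after init, minikube retries until the default service account exists and the minikube-rbac clusterrolebinding has taken effect. A sketch of that retry loop, with the kubectl and kubeconfig paths assumed from the log lines above:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds,
    // matching the roughly 500ms retry cadence visible in the log.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not found within %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl",
    		"/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
    		log.Fatal(err)
    	}
    }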
	I0722 11:57:15.343817   58921 kubeadm.go:394] duration metric: took 5m0.988139889s to StartCluster
	I0722 11:57:15.343840   58921 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:57:15.343940   58921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:57:15.345913   58921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:57:15.346216   58921 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:57:15.346387   58921 config.go:182] Loaded profile config "no-preload-339929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 11:57:15.346343   58921 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:57:15.346441   58921 addons.go:69] Setting storage-provisioner=true in profile "no-preload-339929"
	I0722 11:57:15.346454   58921 addons.go:69] Setting metrics-server=true in profile "no-preload-339929"
	I0722 11:57:15.346483   58921 addons.go:234] Setting addon metrics-server=true in "no-preload-339929"
	W0722 11:57:15.346491   58921 addons.go:243] addon metrics-server should already be in state true
	I0722 11:57:15.346485   58921 addons.go:234] Setting addon storage-provisioner=true in "no-preload-339929"
	W0722 11:57:15.346502   58921 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:57:15.346515   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.346529   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.346445   58921 addons.go:69] Setting default-storageclass=true in profile "no-preload-339929"
	I0722 11:57:15.346600   58921 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-339929"
	I0722 11:57:15.346890   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.346920   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.346890   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.346994   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.347007   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.347025   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.347928   58921 out.go:177] * Verifying Kubernetes components...
	I0722 11:57:15.352932   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:57:15.362633   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0722 11:57:15.362665   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I0722 11:57:15.362630   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34691
	I0722 11:57:15.363041   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363053   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363133   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363521   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363537   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363544   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363558   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363568   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363587   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363905   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.363945   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.364078   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.364104   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.364485   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.364517   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.364602   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.364629   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.367146   58921 addons.go:234] Setting addon default-storageclass=true in "no-preload-339929"
	W0722 11:57:15.367170   58921 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:57:15.367197   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.367419   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.367436   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.380125   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46561
	I0722 11:57:15.380393   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
	I0722 11:57:15.380557   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.380972   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.381545   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.381546   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.381570   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.381585   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.381956   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.381987   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.382133   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.382152   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.383766   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.383925   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.384000   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0722 11:57:15.384347   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.384833   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.384856   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.385195   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.385635   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.385664   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.386055   58921 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:57:15.386060   58921 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:57:15.387105   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:57:15.387119   58921 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:57:15.387138   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.387186   58921 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:57:15.387197   58921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:57:15.387215   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.390591   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.390928   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.390975   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.390996   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.391233   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.391366   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.391387   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.391423   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.391599   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.391632   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.391802   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.391841   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.391986   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.392111   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.401709   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34965
	I0722 11:57:15.402082   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.402543   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.402563   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.402854   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.403074   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.404406   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.404603   58921 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:57:15.404617   58921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:57:15.404633   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.407332   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.407829   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.407853   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.408041   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.408218   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.408356   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.408491   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.550538   58921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:57:15.568066   58921 node_ready.go:35] waiting up to 6m0s for node "no-preload-339929" to be "Ready" ...
	I0722 11:57:15.577034   58921 node_ready.go:49] node "no-preload-339929" has status "Ready":"True"
	I0722 11:57:15.577054   58921 node_ready.go:38] duration metric: took 8.96328ms for node "no-preload-339929" to be "Ready" ...
	I0722 11:57:15.577062   58921 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:57:15.587213   58921 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:15.629092   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:57:15.714856   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:57:15.714885   58921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:57:15.746923   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:57:15.781300   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:57:15.781327   58921 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:57:15.842787   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:57:15.842816   58921 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:57:15.884901   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:57:16.165926   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.165955   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166184   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166200   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166255   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166296   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166315   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166329   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166340   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166454   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166497   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166520   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166542   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166581   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166595   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166551   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166519   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166954   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166969   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.199171   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.199196   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.199533   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.199558   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.199573   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.678992   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.679015   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.679366   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.679389   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.679400   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.679400   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.679408   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.679658   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.679699   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.679708   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.679719   58921 addons.go:475] Verifying addon metrics-server=true in "no-preload-339929"
	I0722 11:57:16.681483   58921 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:57:16.682888   58921 addons.go:510] duration metric: took 1.336544744s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 11:57:17.596659   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:20.093596   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:24.750495   59674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:57:24.750641   59674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 11:57:24.752309   59674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:57:24.752368   59674 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:57:24.752499   59674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:57:24.752662   59674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:57:24.752788   59674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:57:24.752851   59674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:57:24.754464   59674 out.go:204]   - Generating certificates and keys ...
	I0722 11:57:24.754528   59674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:57:24.754595   59674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:57:24.754712   59674 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:57:24.754926   59674 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:57:24.755033   59674 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:57:24.755114   59674 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:57:24.755188   59674 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:57:24.755276   59674 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:57:24.755374   59674 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:57:24.755472   59674 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:57:24.755513   59674 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:57:24.755561   59674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:57:24.755606   59674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:57:24.755647   59674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:57:24.755700   59674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:57:24.755742   59674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:57:24.755836   59674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:57:24.755950   59674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:57:24.755986   59674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:57:24.756089   59674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:57:24.757395   59674 out.go:204]   - Booting up control plane ...
	I0722 11:57:24.757482   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:57:24.757566   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:57:24.757657   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:57:24.757905   59674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:57:24.758131   59674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:57:24.758205   59674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:57:24.758311   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.758565   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.758650   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.758852   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.758957   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759153   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759217   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759412   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759495   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759688   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759696   59674 kubeadm.go:310] 
	I0722 11:57:24.759729   59674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:57:24.759791   59674 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:57:24.759812   59674 kubeadm.go:310] 
	I0722 11:57:24.759868   59674 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:57:24.759903   59674 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:57:24.760077   59674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:57:24.760094   59674 kubeadm.go:310] 
	I0722 11:57:24.760245   59674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:57:24.760300   59674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:57:24.760350   59674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:57:24.760363   59674 kubeadm.go:310] 
	I0722 11:57:24.760534   59674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:57:24.760640   59674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:57:24.760654   59674 kubeadm.go:310] 
	I0722 11:57:24.760819   59674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:57:24.760902   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:57:24.761013   59674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:57:24.761124   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:57:24.761213   59674 kubeadm.go:310] 
	W0722 11:57:24.761263   59674 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
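The kubeadm failure above bundles its own triage advice: confirm the kubelet is actually running, then inspect whichever control-plane container crashed via the CRI-O socket. A minimal sketch of that sequence on the node, assembled only from the commands quoted in the error message (CONTAINERID is a placeholder for an ID taken from the listing, and running it over 'minikube ssh' is an assumption, not something the log shows):
	# on the node, e.g. via 'minikube ssh': check kubelet state first
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50
	# then list control-plane containers under CRI-O and read the failing one's logs
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID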
	
	I0722 11:57:24.761321   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:57:25.222130   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:25.236593   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:57:25.247009   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:57:25.247026   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:57:25.247078   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:57:25.256617   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:57:25.256674   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:57:25.265950   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:57:25.275080   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:57:25.275133   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:57:25.285058   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:57:25.294015   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:57:25.294070   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:57:25.304009   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:57:25.313492   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:57:25.313565   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:57:25.322903   59674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:57:22.593478   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.593498   58921 pod_ready.go:81] duration metric: took 7.006267885s for pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.593505   58921 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.598122   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.598149   58921 pod_ready.go:81] duration metric: took 4.631196ms for pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.598159   58921 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.602448   58921 pod_ready.go:92] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.602466   58921 pod_ready.go:81] duration metric: took 4.300795ms for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.602474   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.607921   58921 pod_ready.go:92] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.607940   58921 pod_ready.go:81] duration metric: took 5.46066ms for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.607951   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.114900   58921 pod_ready.go:92] pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.114929   58921 pod_ready.go:81] duration metric: took 1.506968399s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.114942   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b5xwg" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.190875   58921 pod_ready.go:92] pod "kube-proxy-b5xwg" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.190895   58921 pod_ready.go:81] duration metric: took 75.947595ms for pod "kube-proxy-b5xwg" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.190905   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.590994   58921 pod_ready.go:92] pod "kube-scheduler-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.591020   58921 pod_ready.go:81] duration metric: took 400.108088ms for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.591029   58921 pod_ready.go:38] duration metric: took 9.013958119s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:57:24.591051   58921 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:57:24.591110   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:57:24.609675   58921 api_server.go:72] duration metric: took 9.263421304s to wait for apiserver process to appear ...
	I0722 11:57:24.609701   58921 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:57:24.609719   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:57:24.613446   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 200:
	ok
	I0722 11:57:24.614282   58921 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 11:57:24.614301   58921 api_server.go:131] duration metric: took 4.591983ms to wait for apiserver health ...
	I0722 11:57:24.614310   58921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:57:24.796872   58921 system_pods.go:59] 9 kube-system pods found
	I0722 11:57:24.796910   58921 system_pods.go:61] "coredns-5cfdc65f69-vg4wp" [3556f321-9c0a-437f-a06e-4eca4b07781d] Running
	I0722 11:57:24.796917   58921 system_pods.go:61] "coredns-5cfdc65f69-xxf6t" [6e933cad-a95a-47c4-b8b9-89205619fb70] Running
	I0722 11:57:24.796922   58921 system_pods.go:61] "etcd-no-preload-339929" [6cd32101-c444-44a3-b024-708228c1b2de] Running
	I0722 11:57:24.796927   58921 system_pods.go:61] "kube-apiserver-no-preload-339929" [cf127459-d105-4ed0-9b65-653e9353e123] Running
	I0722 11:57:24.796933   58921 system_pods.go:61] "kube-controller-manager-no-preload-339929" [11b4341e-5d2d-41c2-a078-6345546aa418] Running
	I0722 11:57:24.796940   58921 system_pods.go:61] "kube-proxy-b5xwg" [6ec19ad2-170e-4402-bcb7-ebf14a2537ce] Running
	I0722 11:57:24.796944   58921 system_pods.go:61] "kube-scheduler-no-preload-339929" [bb0b4b2e-ba9c-4521-bcf9-67230983dc8e] Running
	I0722 11:57:24.796953   58921 system_pods.go:61] "metrics-server-78fcd8795b-9vzx2" [bb2ae44c-3190-4025-8f2e-e236c52da27e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:24.796960   58921 system_pods.go:61] "storage-provisioner" [f56d91d7-a252-485d-936d-3f44804d26ec] Running
	I0722 11:57:24.796973   58921 system_pods.go:74] duration metric: took 182.655813ms to wait for pod list to return data ...
	I0722 11:57:24.796985   58921 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:57:24.992009   58921 default_sa.go:45] found service account: "default"
	I0722 11:57:24.992032   58921 default_sa.go:55] duration metric: took 195.040103ms for default service account to be created ...
	I0722 11:57:24.992040   58921 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:57:25.196738   58921 system_pods.go:86] 9 kube-system pods found
	I0722 11:57:25.196763   58921 system_pods.go:89] "coredns-5cfdc65f69-vg4wp" [3556f321-9c0a-437f-a06e-4eca4b07781d] Running
	I0722 11:57:25.196768   58921 system_pods.go:89] "coredns-5cfdc65f69-xxf6t" [6e933cad-a95a-47c4-b8b9-89205619fb70] Running
	I0722 11:57:25.196772   58921 system_pods.go:89] "etcd-no-preload-339929" [6cd32101-c444-44a3-b024-708228c1b2de] Running
	I0722 11:57:25.196777   58921 system_pods.go:89] "kube-apiserver-no-preload-339929" [cf127459-d105-4ed0-9b65-653e9353e123] Running
	I0722 11:57:25.196781   58921 system_pods.go:89] "kube-controller-manager-no-preload-339929" [11b4341e-5d2d-41c2-a078-6345546aa418] Running
	I0722 11:57:25.196785   58921 system_pods.go:89] "kube-proxy-b5xwg" [6ec19ad2-170e-4402-bcb7-ebf14a2537ce] Running
	I0722 11:57:25.196789   58921 system_pods.go:89] "kube-scheduler-no-preload-339929" [bb0b4b2e-ba9c-4521-bcf9-67230983dc8e] Running
	I0722 11:57:25.196795   58921 system_pods.go:89] "metrics-server-78fcd8795b-9vzx2" [bb2ae44c-3190-4025-8f2e-e236c52da27e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:25.196799   58921 system_pods.go:89] "storage-provisioner" [f56d91d7-a252-485d-936d-3f44804d26ec] Running
	I0722 11:57:25.196806   58921 system_pods.go:126] duration metric: took 204.761601ms to wait for k8s-apps to be running ...
	I0722 11:57:25.196813   58921 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:57:25.196855   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:25.217589   58921 system_svc.go:56] duration metric: took 20.766557ms WaitForService to wait for kubelet
	I0722 11:57:25.217619   58921 kubeadm.go:582] duration metric: took 9.871369454s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:57:25.217641   58921 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:57:25.395091   58921 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:57:25.395116   58921 node_conditions.go:123] node cpu capacity is 2
	I0722 11:57:25.395128   58921 node_conditions.go:105] duration metric: took 177.480389ms to run NodePressure ...
	I0722 11:57:25.395143   58921 start.go:241] waiting for startup goroutines ...
	I0722 11:57:25.395159   58921 start.go:246] waiting for cluster config update ...
	I0722 11:57:25.395173   58921 start.go:255] writing updated cluster config ...
	I0722 11:57:25.395623   58921 ssh_runner.go:195] Run: rm -f paused
	I0722 11:57:25.449438   58921 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0722 11:57:25.450840   58921 out.go:177] * Done! kubectl is now configured to use "no-preload-339929" cluster and "default" namespace by default
	I0722 11:57:25.545662   59674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:59:21.714624   59674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:59:21.714729   59674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 11:59:21.716617   59674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:59:21.716683   59674 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:59:21.716771   59674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:59:21.716939   59674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:59:21.717077   59674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:59:21.717136   59674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:59:21.718742   59674 out.go:204]   - Generating certificates and keys ...
	I0722 11:59:21.718837   59674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:59:21.718927   59674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:59:21.718995   59674 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:59:21.719065   59674 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:59:21.719140   59674 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:59:21.719187   59674 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:59:21.719251   59674 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:59:21.719329   59674 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:59:21.719408   59674 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:59:21.719497   59674 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:59:21.719538   59674 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:59:21.719592   59674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:59:21.719635   59674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:59:21.719680   59674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:59:21.719745   59674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:59:21.719823   59674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:59:21.719970   59674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:59:21.720056   59674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:59:21.720090   59674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:59:21.720147   59674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:59:21.721505   59674 out.go:204]   - Booting up control plane ...
	I0722 11:59:21.721586   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:59:21.721656   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:59:21.721712   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:59:21.721778   59674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:59:21.721923   59674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:59:21.721988   59674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:59:21.722045   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722201   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722272   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722431   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722488   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722658   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722730   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722885   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722943   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.723110   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.723118   59674 kubeadm.go:310] 
	I0722 11:59:21.723154   59674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:59:21.723192   59674 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:59:21.723198   59674 kubeadm.go:310] 
	I0722 11:59:21.723226   59674 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:59:21.723255   59674 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:59:21.723339   59674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:59:21.723346   59674 kubeadm.go:310] 
	I0722 11:59:21.723442   59674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:59:21.723495   59674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:59:21.723537   59674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:59:21.723546   59674 kubeadm.go:310] 
	I0722 11:59:21.723709   59674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:59:21.723823   59674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:59:21.723833   59674 kubeadm.go:310] 
	I0722 11:59:21.723941   59674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:59:21.724023   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:59:21.724086   59674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:59:21.724156   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:59:21.724197   59674 kubeadm.go:310] 
	I0722 11:59:21.724212   59674 kubeadm.go:394] duration metric: took 7m57.831193066s to StartCluster
	I0722 11:59:21.724246   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:59:21.724296   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:59:21.771578   59674 cri.go:89] found id: ""
	I0722 11:59:21.771611   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.771622   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:59:21.771631   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:59:21.771694   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:59:21.809027   59674 cri.go:89] found id: ""
	I0722 11:59:21.809055   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.809065   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:59:21.809071   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:59:21.809143   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:59:21.844667   59674 cri.go:89] found id: ""
	I0722 11:59:21.844690   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.844698   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:59:21.844703   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:59:21.844754   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:59:21.888054   59674 cri.go:89] found id: ""
	I0722 11:59:21.888078   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.888086   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:59:21.888091   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:59:21.888150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:59:21.931688   59674 cri.go:89] found id: ""
	I0722 11:59:21.931711   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.931717   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:59:21.931722   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:59:21.931775   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:59:21.974044   59674 cri.go:89] found id: ""
	I0722 11:59:21.974074   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.974095   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:59:21.974102   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:59:21.974170   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:59:22.010302   59674 cri.go:89] found id: ""
	I0722 11:59:22.010326   59674 logs.go:276] 0 containers: []
	W0722 11:59:22.010334   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:59:22.010338   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:59:22.010385   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:59:22.047170   59674 cri.go:89] found id: ""
	I0722 11:59:22.047201   59674 logs.go:276] 0 containers: []
	W0722 11:59:22.047212   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:59:22.047224   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:59:22.047237   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:59:22.086648   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:59:22.086678   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:59:22.141255   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:59:22.141288   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:59:22.157063   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:59:22.157095   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:59:22.244259   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:59:22.244284   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:59:22.244300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0722 11:59:22.357489   59674 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 11:59:22.357536   59674 out.go:239] * 
	W0722 11:59:22.357600   59674 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:59:22.357622   59674 out.go:239] * 
	W0722 11:59:22.358374   59674 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 11:59:22.361655   59674 out.go:177] 
	W0722 11:59:22.362800   59674 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:59:22.362845   59674 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 11:59:22.362860   59674 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 11:59:22.364239   59674 out.go:177] 
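	The suggested remediation can be tried against the failing profile before re-running the test. A minimal sketch, assuming the profile name no-preload-339929 and the cgroup-driver value taken verbatim from the suggestion above (the exact profile and driver on a given host may differ):

		# restart the profile with the kubelet cgroup driver override suggested in the log
		minikube start -p no-preload-339929 --extra-config=kubelet.cgroup-driver=systemd
		# then inspect the kubelet journal on the node, where the health-check failures above would surface
		minikube ssh -p no-preload-339929 'sudo journalctl -xeu kubelet'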
	
	
	==> CRI-O <==
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.419857570Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721649987419834851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=612bfff0-65ca-4664-97b8-e85884da9159 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.420404107Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3814f0a9-4923-4aae-a504-8976736d53c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.420472067Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3814f0a9-4923-4aae-a504-8976736d53c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.420763966Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1a39adf6b9e9378c9695949b0ed79aad89e6211b84d35cad4ab7c29f3da22ae,PodSandboxId:ae585ea000cb2ce0ea120a3ded77b1806634b3475c71f00436611c9daf327612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649437235440759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-xxf6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e933cad-a95a-47c4-b8b9-89205619fb70,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:376a436fd8b890963a66d8a99735988693b079dc2a956af2126f7869f0053e0f,PodSandboxId:8d80b05b44b9721479e2e2c9005fabd42108cf30979547bf56fd71477f585975,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649437043061939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vg4wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3556f321-9c0a-437f-a06e-4eca4b07781d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba8b852f9942c9818af636892aa747f89f3169141e374be66b6112779e5c757,PodSandboxId:c390506aa48d8e46d671e5a76f5160d34bd5c9623758f3e396cd9a93dc2d2916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721649436722894912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f56d91d7-a252-485d-936d-3f44804d26ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6a274fac983e3f03f7a8571a5c40733e0fa8c1af7ffa5124cac7eeedb178de,PodSandboxId:70d9710be7f5c7b3728cc04c57e3b33dc724fd5be3fcb9cda81c5c885a3dd6fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721649434968855963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b5xwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec19ad2-170e-4402-bcb7-ebf14a2537ce,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af66c67a58cf0c017b01f074f98cf9283faf228c0b838fc7d4aa110b04c08ffa,PodSandboxId:c7d3ab04f4ed965cbb96a67d3bd8e173b10686962b2b39f7667ad02b70a312e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721649424390244154,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef2f9dd45be154ce4a9790165b4dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8434d25d9dec778493e014a1223688e94d748f85f1fa62621775f2fcfe0d223,PodSandboxId:02e56cd7d11112913ccacc24721fb70943597a249b56c7e8e933af60a648dc09,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721649424377862481,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff9d431109c2f52e7587ade669ddf2,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e15950675152888a4a35729c4224dc87dbbc417db28cb8168f88c26f738b951,PodSandboxId:60c7105c285b6d1923be5b5da90a37f81d9a39410ae2b930ec03a914ee64170e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721649424310049408,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea1153371c27970571c21f4e38f3274,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a05a56db34efbd1c14d9d20cefc96297b98e7e0ef0ec4f9a85f9e4b5d28d34,PodSandboxId:06b0c7bfff953bff86ae346836288acaf08a9e921ef6d33f623301318c876570,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721649424258460009,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247e869804e354d29ecaf61281da8f07a64cc6d207d5b43ac5df5b2d3a916b98,PodSandboxId:6b62e6b7f0e4823ef05a6a6914f78361d042e916188d35ae64bc247204561d60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721649136433121431,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3814f0a9-4923-4aae-a504-8976736d53c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.459476918Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0cb6d3e4-0deb-41f0-b145-0cd10b10656b name=/runtime.v1.RuntimeService/Version
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.459775862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0cb6d3e4-0deb-41f0-b145-0cd10b10656b name=/runtime.v1.RuntimeService/Version
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.461460308Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50aae2c6-d7b4-4bab-9cc1-34f063ce0c36 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.462304770Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721649987462282133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50aae2c6-d7b4-4bab-9cc1-34f063ce0c36 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.463041403Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8431890-a8ad-41ee-b0a7-4b55ae022476 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.463090950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8431890-a8ad-41ee-b0a7-4b55ae022476 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.463285625Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1a39adf6b9e9378c9695949b0ed79aad89e6211b84d35cad4ab7c29f3da22ae,PodSandboxId:ae585ea000cb2ce0ea120a3ded77b1806634b3475c71f00436611c9daf327612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649437235440759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-xxf6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e933cad-a95a-47c4-b8b9-89205619fb70,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:376a436fd8b890963a66d8a99735988693b079dc2a956af2126f7869f0053e0f,PodSandboxId:8d80b05b44b9721479e2e2c9005fabd42108cf30979547bf56fd71477f585975,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649437043061939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vg4wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3556f321-9c0a-437f-a06e-4eca4b07781d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba8b852f9942c9818af636892aa747f89f3169141e374be66b6112779e5c757,PodSandboxId:c390506aa48d8e46d671e5a76f5160d34bd5c9623758f3e396cd9a93dc2d2916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721649436722894912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f56d91d7-a252-485d-936d-3f44804d26ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6a274fac983e3f03f7a8571a5c40733e0fa8c1af7ffa5124cac7eeedb178de,PodSandboxId:70d9710be7f5c7b3728cc04c57e3b33dc724fd5be3fcb9cda81c5c885a3dd6fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721649434968855963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b5xwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec19ad2-170e-4402-bcb7-ebf14a2537ce,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af66c67a58cf0c017b01f074f98cf9283faf228c0b838fc7d4aa110b04c08ffa,PodSandboxId:c7d3ab04f4ed965cbb96a67d3bd8e173b10686962b2b39f7667ad02b70a312e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721649424390244154,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef2f9dd45be154ce4a9790165b4dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8434d25d9dec778493e014a1223688e94d748f85f1fa62621775f2fcfe0d223,PodSandboxId:02e56cd7d11112913ccacc24721fb70943597a249b56c7e8e933af60a648dc09,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721649424377862481,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff9d431109c2f52e7587ade669ddf2,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e15950675152888a4a35729c4224dc87dbbc417db28cb8168f88c26f738b951,PodSandboxId:60c7105c285b6d1923be5b5da90a37f81d9a39410ae2b930ec03a914ee64170e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721649424310049408,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea1153371c27970571c21f4e38f3274,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a05a56db34efbd1c14d9d20cefc96297b98e7e0ef0ec4f9a85f9e4b5d28d34,PodSandboxId:06b0c7bfff953bff86ae346836288acaf08a9e921ef6d33f623301318c876570,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721649424258460009,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247e869804e354d29ecaf61281da8f07a64cc6d207d5b43ac5df5b2d3a916b98,PodSandboxId:6b62e6b7f0e4823ef05a6a6914f78361d042e916188d35ae64bc247204561d60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721649136433121431,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8431890-a8ad-41ee-b0a7-4b55ae022476 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.499245611Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61e8543c-8f70-4d6e-922c-2f0a498a26e0 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.499315952Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61e8543c-8f70-4d6e-922c-2f0a498a26e0 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.500739371Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99bd9be2-f86b-4347-bb38-b13907753068 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.501060833Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721649987501040101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99bd9be2-f86b-4347-bb38-b13907753068 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.501651812Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f2ef25f-c6a3-4199-ad03-3be1c3763783 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.501711011Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f2ef25f-c6a3-4199-ad03-3be1c3763783 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.501928126Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1a39adf6b9e9378c9695949b0ed79aad89e6211b84d35cad4ab7c29f3da22ae,PodSandboxId:ae585ea000cb2ce0ea120a3ded77b1806634b3475c71f00436611c9daf327612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649437235440759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-xxf6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e933cad-a95a-47c4-b8b9-89205619fb70,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:376a436fd8b890963a66d8a99735988693b079dc2a956af2126f7869f0053e0f,PodSandboxId:8d80b05b44b9721479e2e2c9005fabd42108cf30979547bf56fd71477f585975,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649437043061939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vg4wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3556f321-9c0a-437f-a06e-4eca4b07781d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba8b852f9942c9818af636892aa747f89f3169141e374be66b6112779e5c757,PodSandboxId:c390506aa48d8e46d671e5a76f5160d34bd5c9623758f3e396cd9a93dc2d2916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721649436722894912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f56d91d7-a252-485d-936d-3f44804d26ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6a274fac983e3f03f7a8571a5c40733e0fa8c1af7ffa5124cac7eeedb178de,PodSandboxId:70d9710be7f5c7b3728cc04c57e3b33dc724fd5be3fcb9cda81c5c885a3dd6fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721649434968855963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b5xwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec19ad2-170e-4402-bcb7-ebf14a2537ce,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af66c67a58cf0c017b01f074f98cf9283faf228c0b838fc7d4aa110b04c08ffa,PodSandboxId:c7d3ab04f4ed965cbb96a67d3bd8e173b10686962b2b39f7667ad02b70a312e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721649424390244154,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef2f9dd45be154ce4a9790165b4dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8434d25d9dec778493e014a1223688e94d748f85f1fa62621775f2fcfe0d223,PodSandboxId:02e56cd7d11112913ccacc24721fb70943597a249b56c7e8e933af60a648dc09,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721649424377862481,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff9d431109c2f52e7587ade669ddf2,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e15950675152888a4a35729c4224dc87dbbc417db28cb8168f88c26f738b951,PodSandboxId:60c7105c285b6d1923be5b5da90a37f81d9a39410ae2b930ec03a914ee64170e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721649424310049408,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea1153371c27970571c21f4e38f3274,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a05a56db34efbd1c14d9d20cefc96297b98e7e0ef0ec4f9a85f9e4b5d28d34,PodSandboxId:06b0c7bfff953bff86ae346836288acaf08a9e921ef6d33f623301318c876570,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721649424258460009,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247e869804e354d29ecaf61281da8f07a64cc6d207d5b43ac5df5b2d3a916b98,PodSandboxId:6b62e6b7f0e4823ef05a6a6914f78361d042e916188d35ae64bc247204561d60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721649136433121431,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f2ef25f-c6a3-4199-ad03-3be1c3763783 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.537734386Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=070abd5e-5bc6-4c8a-b468-69031b664b78 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.537804058Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=070abd5e-5bc6-4c8a-b468-69031b664b78 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.538671849Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cdf0df35-b142-411c-b3af-e73acc74da54 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.538991224Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721649987538969798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cdf0df35-b142-411c-b3af-e73acc74da54 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.539399107Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6008c70b-ff7d-4251-8a91-b6c63285bde2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.539446053Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6008c70b-ff7d-4251-8a91-b6c63285bde2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:06:27 no-preload-339929 crio[740]: time="2024-07-22 12:06:27.539743604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1a39adf6b9e9378c9695949b0ed79aad89e6211b84d35cad4ab7c29f3da22ae,PodSandboxId:ae585ea000cb2ce0ea120a3ded77b1806634b3475c71f00436611c9daf327612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649437235440759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-xxf6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e933cad-a95a-47c4-b8b9-89205619fb70,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:376a436fd8b890963a66d8a99735988693b079dc2a956af2126f7869f0053e0f,PodSandboxId:8d80b05b44b9721479e2e2c9005fabd42108cf30979547bf56fd71477f585975,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649437043061939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vg4wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3556f321-9c0a-437f-a06e-4eca4b07781d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba8b852f9942c9818af636892aa747f89f3169141e374be66b6112779e5c757,PodSandboxId:c390506aa48d8e46d671e5a76f5160d34bd5c9623758f3e396cd9a93dc2d2916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721649436722894912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f56d91d7-a252-485d-936d-3f44804d26ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6a274fac983e3f03f7a8571a5c40733e0fa8c1af7ffa5124cac7eeedb178de,PodSandboxId:70d9710be7f5c7b3728cc04c57e3b33dc724fd5be3fcb9cda81c5c885a3dd6fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721649434968855963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b5xwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec19ad2-170e-4402-bcb7-ebf14a2537ce,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af66c67a58cf0c017b01f074f98cf9283faf228c0b838fc7d4aa110b04c08ffa,PodSandboxId:c7d3ab04f4ed965cbb96a67d3bd8e173b10686962b2b39f7667ad02b70a312e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721649424390244154,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef2f9dd45be154ce4a9790165b4dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8434d25d9dec778493e014a1223688e94d748f85f1fa62621775f2fcfe0d223,PodSandboxId:02e56cd7d11112913ccacc24721fb70943597a249b56c7e8e933af60a648dc09,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721649424377862481,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff9d431109c2f52e7587ade669ddf2,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e15950675152888a4a35729c4224dc87dbbc417db28cb8168f88c26f738b951,PodSandboxId:60c7105c285b6d1923be5b5da90a37f81d9a39410ae2b930ec03a914ee64170e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721649424310049408,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea1153371c27970571c21f4e38f3274,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a05a56db34efbd1c14d9d20cefc96297b98e7e0ef0ec4f9a85f9e4b5d28d34,PodSandboxId:06b0c7bfff953bff86ae346836288acaf08a9e921ef6d33f623301318c876570,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721649424258460009,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247e869804e354d29ecaf61281da8f07a64cc6d207d5b43ac5df5b2d3a916b98,PodSandboxId:6b62e6b7f0e4823ef05a6a6914f78361d042e916188d35ae64bc247204561d60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721649136433121431,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6008c70b-ff7d-4251-8a91-b6c63285bde2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a1a39adf6b9e9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   ae585ea000cb2       coredns-5cfdc65f69-xxf6t
	376a436fd8b89       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   8d80b05b44b97       coredns-5cfdc65f69-vg4wp
	dba8b852f9942       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   c390506aa48d8       storage-provisioner
	ad6a274fac983       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   9 minutes ago       Running             kube-proxy                0                   70d9710be7f5c       kube-proxy-b5xwg
	af66c67a58cf0       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   9 minutes ago       Running             kube-scheduler            2                   c7d3ab04f4ed9       kube-scheduler-no-preload-339929
	b8434d25d9dec       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   9 minutes ago       Running             etcd                      2                   02e56cd7d1111       etcd-no-preload-339929
	8e15950675152       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   9 minutes ago       Running             kube-controller-manager   2                   60c7105c285b6       kube-controller-manager-no-preload-339929
	84a05a56db34e       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   9 minutes ago       Running             kube-apiserver            2                   06b0c7bfff953       kube-apiserver-no-preload-339929
	247e869804e35       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Exited              kube-apiserver            1                   6b62e6b7f0e48       kube-apiserver-no-preload-339929
	
	
	==> coredns [376a436fd8b890963a66d8a99735988693b079dc2a956af2126f7869f0053e0f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [a1a39adf6b9e9378c9695949b0ed79aad89e6211b84d35cad4ab7c29f3da22ae] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-339929
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-339929
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=no-preload-339929
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T11_57_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 11:57:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-339929
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 12:06:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 12:02:26 +0000   Mon, 22 Jul 2024 11:57:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 12:02:26 +0000   Mon, 22 Jul 2024 11:57:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 12:02:26 +0000   Mon, 22 Jul 2024 11:57:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 12:02:26 +0000   Mon, 22 Jul 2024 11:57:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.112
	  Hostname:    no-preload-339929
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4276fd8212f54a07afb517aee0ecb30d
	  System UUID:                4276fd82-12f5-4a07-afb5-17aee0ecb30d
	  Boot ID:                    dc98608e-1eaf-4e96-a621-04b1c3b629ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-vg4wp                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-5cfdc65f69-xxf6t                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-no-preload-339929                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-no-preload-339929             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-no-preload-339929    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-b5xwg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-no-preload-339929             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-78fcd8795b-9vzx2              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node no-preload-339929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node no-preload-339929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node no-preload-339929 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node no-preload-339929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node no-preload-339929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node no-preload-339929 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s                  node-controller  Node no-preload-339929 event: Registered Node no-preload-339929 in Controller
	
	
	==> dmesg <==
	[  +0.052332] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041508] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.807696] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.471733] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.634492] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.837731] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.071038] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066485] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.175944] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.137635] systemd-fstab-generator[696]: Ignoring "noauto" option for root device
	[  +0.313918] systemd-fstab-generator[726]: Ignoring "noauto" option for root device
	[Jul22 11:52] systemd-fstab-generator[1190]: Ignoring "noauto" option for root device
	[  +0.061140] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.669404] systemd-fstab-generator[1312]: Ignoring "noauto" option for root device
	[  +4.624692] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.802160] kauditd_printk_skb: 90 callbacks suppressed
	[Jul22 11:57] systemd-fstab-generator[2960]: Ignoring "noauto" option for root device
	[  +0.064034] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.174751] kauditd_printk_skb: 52 callbacks suppressed
	[  +1.815252] systemd-fstab-generator[3284]: Ignoring "noauto" option for root device
	[  +5.405201] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.509533] systemd-fstab-generator[3557]: Ignoring "noauto" option for root device
	[  +4.632777] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [b8434d25d9dec778493e014a1223688e94d748f85f1fa62621775f2fcfe0d223] <==
	{"level":"info","ts":"2024-07-22T11:57:04.749104Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T11:57:04.749365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d72958ff42397886 switched to configuration voters=(15504021045550610566)"}
	{"level":"info","ts":"2024-07-22T11:57:04.749522Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f21c4f9090188b3d","local-member-id":"d72958ff42397886","added-peer-id":"d72958ff42397886","added-peer-peer-urls":["https://192.168.61.112:2380"]}
	{"level":"info","ts":"2024-07-22T11:57:04.749645Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.61.112:2380"}
	{"level":"info","ts":"2024-07-22T11:57:04.749669Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.61.112:2380"}
	{"level":"info","ts":"2024-07-22T11:57:05.413607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d72958ff42397886 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-22T11:57:05.413669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d72958ff42397886 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-22T11:57:05.413695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d72958ff42397886 received MsgPreVoteResp from d72958ff42397886 at term 1"}
	{"level":"info","ts":"2024-07-22T11:57:05.41371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d72958ff42397886 became candidate at term 2"}
	{"level":"info","ts":"2024-07-22T11:57:05.413715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d72958ff42397886 received MsgVoteResp from d72958ff42397886 at term 2"}
	{"level":"info","ts":"2024-07-22T11:57:05.413725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d72958ff42397886 became leader at term 2"}
	{"level":"info","ts":"2024-07-22T11:57:05.413732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d72958ff42397886 elected leader d72958ff42397886 at term 2"}
	{"level":"info","ts":"2024-07-22T11:57:05.417747Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d72958ff42397886","local-member-attributes":"{Name:no-preload-339929 ClientURLs:[https://192.168.61.112:2379]}","request-path":"/0/members/d72958ff42397886/attributes","cluster-id":"f21c4f9090188b3d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T11:57:05.417917Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:57:05.418013Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:57:05.418426Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:57:05.421647Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-22T11:57:05.425364Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.112:2379"}
	{"level":"info","ts":"2024-07-22T11:57:05.425515Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T11:57:05.425631Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T11:57:05.426149Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-22T11:57:05.429182Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f21c4f9090188b3d","local-member-id":"d72958ff42397886","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:57:05.429386Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:57:05.429428Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:57:05.433848Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:06:27 up 14 min,  0 users,  load average: 0.06, 0.12, 0.09
	Linux no-preload-339929 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [247e869804e354d29ecaf61281da8f07a64cc6d207d5b43ac5df5b2d3a916b98] <==
	W0722 11:56:56.420769       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.420806       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.455761       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.489505       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.492189       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.507820       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.602964       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.613506       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.649080       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.684276       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.731347       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.763728       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.786248       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.906648       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.909246       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.910613       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:57.045916       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:57.217026       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:57.482248       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:57.504011       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:57.649835       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:57:01.094965       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:57:01.239805       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:57:01.244276       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:57:01.250178       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [84a05a56db34efbd1c14d9d20cefc96297b98e7e0ef0ec4f9a85f9e4b5d28d34] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0722 12:02:08.168664       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 12:02:08.168798       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0722 12:02:08.169910       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0722 12:02:08.169942       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:03:08.170835       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 12:03:08.170993       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0722 12:03:08.171085       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 12:03:08.171122       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0722 12:03:08.172154       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0722 12:03:08.172171       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:05:08.172450       1 handler_proxy.go:99] no RequestInfo found in the context
	W0722 12:05:08.172719       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 12:05:08.172976       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0722 12:05:08.173004       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0722 12:05:08.174188       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0722 12:05:08.174261       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [8e15950675152888a4a35729c4224dc87dbbc417db28cb8168f88c26f738b951] <==
	E0722 12:01:15.239241       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:01:15.266088       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:01:45.245402       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:01:45.274244       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:02:15.252338       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:02:15.282469       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 12:02:26.748649       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-339929"
	E0722 12:02:45.257971       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:02:45.290872       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:03:15.266912       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:03:15.298173       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 12:03:26.892362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="332.38µs"
	I0722 12:03:38.891018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="100.465µs"
	E0722 12:03:45.275227       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:03:45.306683       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:04:15.282428       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:04:15.314422       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:04:45.289688       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:04:45.322887       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:05:15.296325       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:05:15.330679       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:05:45.303033       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:05:45.337472       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:06:15.309468       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:06:15.345360       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ad6a274fac983e3f03f7a8571a5c40733e0fa8c1af7ffa5124cac7eeedb178de] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0722 11:57:15.155860       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0722 11:57:15.167115       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.112"]
	E0722 11:57:15.167189       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0722 11:57:15.201695       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0722 11:57:15.201742       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 11:57:15.201775       1 server_linux.go:170] "Using iptables Proxier"
	I0722 11:57:15.204199       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0722 11:57:15.204456       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0722 11:57:15.204482       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 11:57:15.205873       1 config.go:197] "Starting service config controller"
	I0722 11:57:15.205904       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 11:57:15.205925       1 config.go:104] "Starting endpoint slice config controller"
	I0722 11:57:15.205929       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 11:57:15.206491       1 config.go:326] "Starting node config controller"
	I0722 11:57:15.206628       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 11:57:15.307643       1 shared_informer.go:320] Caches are synced for node config
	I0722 11:57:15.307673       1 shared_informer.go:320] Caches are synced for service config
	I0722 11:57:15.307693       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [af66c67a58cf0c017b01f074f98cf9283faf228c0b838fc7d4aa110b04c08ffa] <==
	W0722 11:57:07.242179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 11:57:07.242206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:07.242221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 11:57:07.242228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:07.242256       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 11:57:07.242264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:07.252162       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 11:57:07.252217       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0722 11:57:08.082868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 11:57:08.082989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:08.117756       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 11:57:08.117859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:08.252428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 11:57:08.252522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:08.263365       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 11:57:08.263455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:08.378150       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0722 11:57:08.378249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:08.389071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 11:57:08.389176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:08.403983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 11:57:08.404090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:08.445229       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 11:57:08.445452       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0722 11:57:11.381119       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 12:04:09 no-preload-339929 kubelet[3291]: E0722 12:04:09.899262    3291 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 12:04:09 no-preload-339929 kubelet[3291]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 12:04:09 no-preload-339929 kubelet[3291]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 12:04:09 no-preload-339929 kubelet[3291]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:04:09 no-preload-339929 kubelet[3291]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:04:18 no-preload-339929 kubelet[3291]: E0722 12:04:18.869747    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:04:32 no-preload-339929 kubelet[3291]: E0722 12:04:32.871001    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:04:45 no-preload-339929 kubelet[3291]: E0722 12:04:45.870894    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:04:56 no-preload-339929 kubelet[3291]: E0722 12:04:56.870018    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:05:09 no-preload-339929 kubelet[3291]: E0722 12:05:09.900018    3291 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 12:05:09 no-preload-339929 kubelet[3291]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 12:05:09 no-preload-339929 kubelet[3291]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 12:05:09 no-preload-339929 kubelet[3291]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:05:09 no-preload-339929 kubelet[3291]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:05:10 no-preload-339929 kubelet[3291]: E0722 12:05:10.869971    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:05:25 no-preload-339929 kubelet[3291]: E0722 12:05:25.871387    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:05:40 no-preload-339929 kubelet[3291]: E0722 12:05:40.870331    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:05:53 no-preload-339929 kubelet[3291]: E0722 12:05:53.872602    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:06:04 no-preload-339929 kubelet[3291]: E0722 12:06:04.870442    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:06:09 no-preload-339929 kubelet[3291]: E0722 12:06:09.898148    3291 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 12:06:09 no-preload-339929 kubelet[3291]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 12:06:09 no-preload-339929 kubelet[3291]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 12:06:09 no-preload-339929 kubelet[3291]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:06:09 no-preload-339929 kubelet[3291]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:06:15 no-preload-339929 kubelet[3291]: E0722 12:06:15.870218    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	
	
	==> storage-provisioner [dba8b852f9942c9818af636892aa747f89f3169141e374be66b6112779e5c757] <==
	I0722 11:57:16.955678       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 11:57:16.997648       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 11:57:16.997903       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 11:57:17.018926       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 11:57:17.019718       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"86bd175a-a12f-46c6-806b-7eb3378e0317", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-339929_9b4e3ba2-b157-4f1c-a8c3-255cbfe7abd5 became leader
	I0722 11:57:17.019777       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-339929_9b4e3ba2-b157-4f1c-a8c3-255cbfe7abd5!
	I0722 11:57:17.122753       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-339929_9b4e3ba2-b157-4f1c-a8c3-255cbfe7abd5!
	

                                                
                                                
-- /stdout --
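The kubelet log in the dump above shows metrics-server-78fcd8795b-9vzx2 stuck in ImagePullBackOff against fake.domain/registry.k8s.io/echoserver:1.4, an unresolvable registry, which is why it is the only non-running pod reported below. A minimal manual follow-up against this profile might look like the sketch below (illustrative only, not part of the automated test; assumes crictl is available inside the minikube VM and that the addon carries its usual k8s-app=metrics-server label):

	out/minikube-linux-amd64 ssh -p no-preload-339929 -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4   # expected to fail: the registry domain does not resolve
	kubectl --context no-preload-339929 -n kube-system describe pod -l k8s-app=metrics-server                          # should surface the same Back-off pulling image events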
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-339929 -n no-preload-339929
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-339929 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-9vzx2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-339929 describe pod metrics-server-78fcd8795b-9vzx2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-339929 describe pod metrics-server-78fcd8795b-9vzx2: exit status 1 (60.384044ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-9vzx2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-339929 describe pod metrics-server-78fcd8795b-9vzx2: exit status 1
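The describe-by-name fails with NotFound even though metrics-server-78fcd8795b-9vzx2 appeared in the non-running pod list moments earlier, which suggests the pod was deleted or replaced between the two commands. A hedged sketch for inspecting whatever metrics-server pod currently exists, selecting by label instead of by the stale name (again assuming the usual k8s-app=metrics-server label):

	kubectl --context no-preload-339929 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context no-preload-339929 -n kube-system describe pods -l k8s-app=metrics-server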
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
(identical warning repeated 98 more times while the API server at 192.168.50.51:8443 remained unreachable; repetitions condensed)
E0722 12:01:36.610644   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
E0722 12:03:29.087957   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
E0722 12:04:39.660503   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
E0722 12:06:36.611396   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: (the identical pod-list warning above was emitted 25 more times while 192.168.50.51:8443 kept refusing connections)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-101261 -n old-k8s-version-101261
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-101261 -n old-k8s-version-101261: exit status 2 (222.476924ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-101261" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
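For reference, the readiness wait that times out here can be retried by hand against the same profile. A minimal sketch, assuming the minikube profile name "old-k8s-version-101261" is also the kubectl context name (minikube's default):

	# list the dashboard pods the test polls for
	kubectl --context old-k8s-version-101261 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# wait for readiness with the same 9m budget the test uses
	kubectl --context old-k8s-version-101261 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m

While the apiserver on 192.168.50.51:8443 is down, both commands would be expected to fail with the same "connection refused" seen in the warnings above.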
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-101261 -n old-k8s-version-101261
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-101261 -n old-k8s-version-101261: exit status 2 (222.069282ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
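The status probes above read single fields via --format Go templates; as a sketch, the relevant fields can be read in one call (.Host and .APIServer appear in this report, .Kubelet is an additional field minikube status exposes):

	out/minikube-linux-amd64 status -p old-k8s-version-101261 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'

Here that would be expected to report the host as Running but the apiserver as Stopped, consistent with the exit status 2 results above.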
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-101261 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-101261 logs -n 25: (1.480175661s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC | 22 Jul 24 11:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC | 22 Jul 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-467176                              | cert-expiration-467176       | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-339929             | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-339929                                   | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-467176                              | cert-expiration-467176       | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	| start   | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-802149            | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	| delete  | -p                                                     | disable-driver-mounts-737017 | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	|         | disable-driver-mounts-737017                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:46 UTC |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-101261        | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:45 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-339929                  | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-339929 --memory=2200                     | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC | 22 Jul 24 11:57 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-605740  | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC | 22 Jul 24 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC |                     |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-802149                 | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-101261                              | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:47 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-101261             | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:47 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-101261                              | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-605740       | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:49 UTC | 22 Jul 24 11:57 UTC |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 11:49:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 11:49:15.771364   60225 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:49:15.771757   60225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:49:15.771777   60225 out.go:304] Setting ErrFile to fd 2...
	I0722 11:49:15.771784   60225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:49:15.772270   60225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:49:15.773178   60225 out.go:298] Setting JSON to false
	I0722 11:49:15.774093   60225 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5508,"bootTime":1721643448,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 11:49:15.774158   60225 start.go:139] virtualization: kvm guest
	I0722 11:49:15.776078   60225 out.go:177] * [default-k8s-diff-port-605740] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 11:49:15.777632   60225 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 11:49:15.777656   60225 notify.go:220] Checking for updates...
	I0722 11:49:15.780016   60225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 11:49:15.781179   60225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:49:15.782401   60225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:49:15.783538   60225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 11:49:15.784660   60225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 11:49:15.786153   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:49:15.786546   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:49:15.786580   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:49:15.801130   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40897
	I0722 11:49:15.801454   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:49:15.802000   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:49:15.802022   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:49:15.802343   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:49:15.802519   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:49:15.802785   60225 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 11:49:15.803097   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:49:15.803130   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:49:15.817222   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I0722 11:49:15.817616   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:49:15.818025   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:49:15.818050   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:49:15.818316   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:49:15.818457   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:49:15.851885   60225 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 11:49:15.853142   60225 start.go:297] selected driver: kvm2
	I0722 11:49:15.853162   60225 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:49:15.853293   60225 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 11:49:15.854178   60225 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:49:15.854267   60225 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 11:49:15.869086   60225 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 11:49:15.869437   60225 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:49:15.869496   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:49:15.869510   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:49:15.869553   60225 start.go:340] cluster config:
	{Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:49:15.869650   60225 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:49:15.871443   60225 out.go:177] * Starting "default-k8s-diff-port-605740" primary control-plane node in "default-k8s-diff-port-605740" cluster
	I0722 11:49:18.708660   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:15.872666   60225 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:49:15.872712   60225 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 11:49:15.872722   60225 cache.go:56] Caching tarball of preloaded images
	I0722 11:49:15.872822   60225 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 11:49:15.872836   60225 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 11:49:15.872964   60225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/config.json ...
	I0722 11:49:15.873188   60225 start.go:360] acquireMachinesLock for default-k8s-diff-port-605740: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:49:21.780635   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:27.860643   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:30.932670   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:37.012663   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:40.084620   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:46.164558   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:49.236597   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:55.316683   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:58.388708   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:04.468652   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:07.540692   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:13.620745   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:16.692661   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:22.772655   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:25.844570   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:31.924648   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:34.996632   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:38.000554   59477 start.go:364] duration metric: took 3m13.232713685s to acquireMachinesLock for "embed-certs-802149"
	I0722 11:50:38.000603   59477 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:50:38.000609   59477 fix.go:54] fixHost starting: 
	I0722 11:50:38.000916   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:50:38.000945   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:50:38.015673   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0722 11:50:38.016063   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:50:38.016570   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:50:38.016599   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:50:38.016926   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:50:38.017123   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:38.017256   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:50:38.018766   59477 fix.go:112] recreateIfNeeded on embed-certs-802149: state=Stopped err=<nil>
	I0722 11:50:38.018787   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	W0722 11:50:38.018925   59477 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:50:38.020306   59477 out.go:177] * Restarting existing kvm2 VM for "embed-certs-802149" ...
	I0722 11:50:38.021405   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Start
	I0722 11:50:38.021569   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring networks are active...
	I0722 11:50:38.022209   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring network default is active
	I0722 11:50:38.022492   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring network mk-embed-certs-802149 is active
	I0722 11:50:38.022753   59477 main.go:141] libmachine: (embed-certs-802149) Getting domain xml...
	I0722 11:50:38.023364   59477 main.go:141] libmachine: (embed-certs-802149) Creating domain...
	I0722 11:50:39.205696   59477 main.go:141] libmachine: (embed-certs-802149) Waiting to get IP...
	I0722 11:50:39.206555   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.206928   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.207002   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.206893   60553 retry.go:31] will retry after 250.927989ms: waiting for machine to come up
	I0722 11:50:39.459432   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.459909   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.459938   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.459862   60553 retry.go:31] will retry after 277.950273ms: waiting for machine to come up
	I0722 11:50:37.998282   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:50:37.998320   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:50:37.998616   58921 buildroot.go:166] provisioning hostname "no-preload-339929"
	I0722 11:50:37.998638   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:50:37.998852   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:50:38.000410   58921 machine.go:97] duration metric: took 4m37.434000152s to provisionDockerMachine
	I0722 11:50:38.000456   58921 fix.go:56] duration metric: took 4m37.453731858s for fixHost
	I0722 11:50:38.000466   58921 start.go:83] releasing machines lock for "no-preload-339929", held for 4m37.453770575s
	W0722 11:50:38.000487   58921 start.go:714] error starting host: provision: host is not running
	W0722 11:50:38.000589   58921 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0722 11:50:38.000597   58921 start.go:729] Will try again in 5 seconds ...
	I0722 11:50:39.739339   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.739770   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.739799   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.739724   60553 retry.go:31] will retry after 367.4788ms: waiting for machine to come up
	I0722 11:50:40.109153   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:40.109568   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:40.109598   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:40.109518   60553 retry.go:31] will retry after 599.052603ms: waiting for machine to come up
	I0722 11:50:40.709866   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:40.710342   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:40.710375   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:40.710299   60553 retry.go:31] will retry after 469.478286ms: waiting for machine to come up
	I0722 11:50:41.180930   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:41.181348   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:41.181370   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:41.181302   60553 retry.go:31] will retry after 690.713081ms: waiting for machine to come up
	I0722 11:50:41.873801   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:41.874158   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:41.874182   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:41.874106   60553 retry.go:31] will retry after 828.336067ms: waiting for machine to come up
	I0722 11:50:42.703984   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:42.704401   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:42.704422   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:42.704340   60553 retry.go:31] will retry after 1.22368693s: waiting for machine to come up
	I0722 11:50:43.929406   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:43.929866   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:43.929896   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:43.929838   60553 retry.go:31] will retry after 1.809806439s: waiting for machine to come up
	I0722 11:50:43.002990   58921 start.go:360] acquireMachinesLock for no-preload-339929: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:50:45.741657   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:45.742012   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:45.742034   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:45.741979   60553 retry.go:31] will retry after 2.216041266s: waiting for machine to come up
	I0722 11:50:47.959511   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:47.959979   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:47.960003   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:47.959919   60553 retry.go:31] will retry after 2.278973432s: waiting for machine to come up
	I0722 11:50:50.241992   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:50.242399   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:50.242413   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:50.242377   60553 retry.go:31] will retry after 2.533863574s: waiting for machine to come up
	I0722 11:50:52.779222   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:52.779627   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:52.779661   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:52.779579   60553 retry.go:31] will retry after 3.004874532s: waiting for machine to come up
	I0722 11:50:57.057071   59674 start.go:364] duration metric: took 3m21.54200658s to acquireMachinesLock for "old-k8s-version-101261"
	I0722 11:50:57.057128   59674 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:50:57.057138   59674 fix.go:54] fixHost starting: 
	I0722 11:50:57.057543   59674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:50:57.057575   59674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:50:57.073788   59674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36245
	I0722 11:50:57.074103   59674 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:50:57.074561   59674 main.go:141] libmachine: Using API Version  1
	I0722 11:50:57.074582   59674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:50:57.074903   59674 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:50:57.075091   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:50:57.075225   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetState
	I0722 11:50:57.076587   59674 fix.go:112] recreateIfNeeded on old-k8s-version-101261: state=Stopped err=<nil>
	I0722 11:50:57.076607   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	W0722 11:50:57.076745   59674 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:50:57.079659   59674 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-101261" ...
	I0722 11:50:55.787998   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.788533   59477 main.go:141] libmachine: (embed-certs-802149) Found IP for machine: 192.168.72.113
	I0722 11:50:55.788556   59477 main.go:141] libmachine: (embed-certs-802149) Reserving static IP address...
	I0722 11:50:55.788567   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has current primary IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.788933   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "embed-certs-802149", mac: "52:54:00:ce:af:8a", ip: "192.168.72.113"} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.788954   59477 main.go:141] libmachine: (embed-certs-802149) DBG | skip adding static IP to network mk-embed-certs-802149 - found existing host DHCP lease matching {name: "embed-certs-802149", mac: "52:54:00:ce:af:8a", ip: "192.168.72.113"}
	I0722 11:50:55.788965   59477 main.go:141] libmachine: (embed-certs-802149) Reserved static IP address: 192.168.72.113
	I0722 11:50:55.788974   59477 main.go:141] libmachine: (embed-certs-802149) Waiting for SSH to be available...
	I0722 11:50:55.788984   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Getting to WaitForSSH function...
	I0722 11:50:55.791252   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.791573   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.791597   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.791699   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Using SSH client type: external
	I0722 11:50:55.791735   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa (-rw-------)
	I0722 11:50:55.791758   59477 main.go:141] libmachine: (embed-certs-802149) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:50:55.791768   59477 main.go:141] libmachine: (embed-certs-802149) DBG | About to run SSH command:
	I0722 11:50:55.791776   59477 main.go:141] libmachine: (embed-certs-802149) DBG | exit 0
	I0722 11:50:55.916215   59477 main.go:141] libmachine: (embed-certs-802149) DBG | SSH cmd err, output: <nil>: 
	I0722 11:50:55.916575   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetConfigRaw
	I0722 11:50:55.917177   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:55.919429   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.919723   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.919755   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.920020   59477 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/config.json ...
	I0722 11:50:55.920227   59477 machine.go:94] provisionDockerMachine start ...
	I0722 11:50:55.920249   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:55.920461   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:55.922469   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.922731   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.922756   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.922887   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:55.923063   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:55.923205   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:55.923340   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:55.923492   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:55.923698   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:55.923712   59477 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:50:56.032434   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:50:56.032465   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.032684   59477 buildroot.go:166] provisioning hostname "embed-certs-802149"
	I0722 11:50:56.032712   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.032892   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.035477   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.035797   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.035826   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.035969   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.036126   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.036288   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.036426   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.036649   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.036806   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.036818   59477 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-802149 && echo "embed-certs-802149" | sudo tee /etc/hostname
	I0722 11:50:56.158574   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-802149
	
	I0722 11:50:56.158609   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.161390   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.161780   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.161812   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.161978   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.162246   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.162444   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.162593   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.162793   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.162965   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.162983   59477 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-802149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-802149/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-802149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:50:56.281386   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:50:56.281421   59477 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:50:56.281454   59477 buildroot.go:174] setting up certificates
	I0722 11:50:56.281470   59477 provision.go:84] configureAuth start
	I0722 11:50:56.281487   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.281781   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:56.284122   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.284438   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.284468   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.284549   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.286400   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.286806   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.286835   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.286962   59477 provision.go:143] copyHostCerts
	I0722 11:50:56.287027   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:50:56.287038   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:50:56.287102   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:50:56.287205   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:50:56.287214   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:50:56.287241   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:50:56.287297   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:50:56.287304   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:50:56.287326   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:50:56.287372   59477 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.embed-certs-802149 san=[127.0.0.1 192.168.72.113 embed-certs-802149 localhost minikube]
	I0722 11:50:56.388618   59477 provision.go:177] copyRemoteCerts
	I0722 11:50:56.388666   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:50:56.388689   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.391149   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.391436   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.391460   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.391656   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.391810   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.391928   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.392068   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:56.474640   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:50:56.497641   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 11:50:56.519444   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:50:56.541351   59477 provision.go:87] duration metric: took 259.857731ms to configureAuth
	I0722 11:50:56.541381   59477 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:50:56.541543   59477 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:50:56.541625   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.544154   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.544682   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.544718   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.544922   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.545125   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.545301   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.545427   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.545653   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.545828   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.545844   59477 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:50:56.811690   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:50:56.811726   59477 machine.go:97] duration metric: took 891.484788ms to provisionDockerMachine
	I0722 11:50:56.811740   59477 start.go:293] postStartSetup for "embed-certs-802149" (driver="kvm2")
	I0722 11:50:56.811772   59477 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:50:56.811791   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:56.812107   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:50:56.812137   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.814602   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.815007   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.815032   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.815143   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.815380   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.815566   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.815746   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:56.904332   59477 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:50:56.908423   59477 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:50:56.908451   59477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:50:56.908508   59477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:50:56.908587   59477 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:50:56.908680   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:50:56.919264   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:50:56.943783   59477 start.go:296] duration metric: took 132.033326ms for postStartSetup
	I0722 11:50:56.943814   59477 fix.go:56] duration metric: took 18.943205526s for fixHost
	I0722 11:50:56.943833   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.946256   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.946547   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.946575   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.946732   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.946929   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.947082   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.947188   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.947356   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.947518   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.947528   59477 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:50:57.056893   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649057.031410961
	
	I0722 11:50:57.056927   59477 fix.go:216] guest clock: 1721649057.031410961
	I0722 11:50:57.056936   59477 fix.go:229] Guest: 2024-07-22 11:50:57.031410961 +0000 UTC Remote: 2024-07-22 11:50:56.943818166 +0000 UTC m=+212.308172183 (delta=87.592795ms)
	I0722 11:50:57.056961   59477 fix.go:200] guest clock delta is within tolerance: 87.592795ms
	I0722 11:50:57.056970   59477 start.go:83] releasing machines lock for "embed-certs-802149", held for 19.056384178s
	I0722 11:50:57.057002   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.057268   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:57.059965   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.060412   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.060443   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.060671   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061167   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061345   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061428   59477 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:50:57.061479   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:57.061561   59477 ssh_runner.go:195] Run: cat /version.json
	I0722 11:50:57.061586   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:57.064433   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.064735   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.064856   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.064879   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.065018   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:57.065118   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.065143   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.065201   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:57.065298   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:57.065408   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:57.065481   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:57.065556   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:57.065624   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:57.065770   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:57.167044   59477 ssh_runner.go:195] Run: systemctl --version
	I0722 11:50:57.172714   59477 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:50:57.313674   59477 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:50:57.319474   59477 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:50:57.319535   59477 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:50:57.335011   59477 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:50:57.335031   59477 start.go:495] detecting cgroup driver to use...
	I0722 11:50:57.335093   59477 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:50:57.351191   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:50:57.365322   59477 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:50:57.365376   59477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:50:57.379264   59477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:50:57.393946   59477 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:50:57.510830   59477 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:50:57.687208   59477 docker.go:233] disabling docker service ...
	I0722 11:50:57.687269   59477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:50:57.703909   59477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:50:57.717812   59477 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:50:57.855988   59477 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:50:57.973911   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:50:57.988891   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:50:58.007784   59477 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 11:50:58.007841   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.019588   59477 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:50:58.019649   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.030056   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.042635   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.053368   59477 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:50:58.064180   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.074677   59477 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.092573   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.103630   59477 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:50:58.114065   59477 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:50:58.114131   59477 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:50:58.128769   59477 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:50:58.139226   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:50:58.301342   59477 ssh_runner.go:195] Run: sudo systemctl restart crio
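	The block above rewrites only a handful of keys in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, and the net.ipv4.ip_unprivileged_port_start entry under default_sysctls), removes /etc/cni/net.mk, and then restarts CRI-O. When a start in this state needs debugging, reading those keys back is usually enough to confirm the edits landed; this is a sketch that uses only the file path and key names taken from the commands logged above:
	
	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio
	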
	I0722 11:50:58.455996   59477 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:50:58.456085   59477 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:50:58.460904   59477 start.go:563] Will wait 60s for crictl version
	I0722 11:50:58.460969   59477 ssh_runner.go:195] Run: which crictl
	I0722 11:50:58.464918   59477 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:50:58.501783   59477 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:50:58.501867   59477 ssh_runner.go:195] Run: crio --version
	I0722 11:50:58.529010   59477 ssh_runner.go:195] Run: crio --version
	I0722 11:50:58.566811   59477 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 11:50:58.568309   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:58.571088   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:58.571594   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:58.571620   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:58.571813   59477 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 11:50:58.575927   59477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:50:58.589002   59477 kubeadm.go:883] updating cluster {Name:embed-certs-802149 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:50:58.589126   59477 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:50:58.589187   59477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:50:58.625716   59477 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 11:50:58.625836   59477 ssh_runner.go:195] Run: which lz4
	I0722 11:50:58.629760   59477 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 11:50:58.634037   59477 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:50:58.634070   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
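	The stat failure above just means no preload tarball was on the guest yet, so the image preload (about 406 MB) is copied over before it is extracted. If the extraction step that follows were to fail, the first thing worth checking is that the copy completed; the expected size is the byte count reported in the scp line above (a sketch, nothing minikube-specific):
	
	# the scp above reports 406200976 bytes; a short copy would show up here
	stat -c %s /preloaded.tar.lz4
	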
	I0722 11:50:57.080830   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .Start
	I0722 11:50:57.080987   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring networks are active...
	I0722 11:50:57.081647   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring network default is active
	I0722 11:50:57.081955   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring network mk-old-k8s-version-101261 is active
	I0722 11:50:57.082277   59674 main.go:141] libmachine: (old-k8s-version-101261) Getting domain xml...
	I0722 11:50:57.083008   59674 main.go:141] libmachine: (old-k8s-version-101261) Creating domain...
	I0722 11:50:58.331212   59674 main.go:141] libmachine: (old-k8s-version-101261) Waiting to get IP...
	I0722 11:50:58.332090   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:58.332510   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:58.332594   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:58.332505   60690 retry.go:31] will retry after 310.971479ms: waiting for machine to come up
	I0722 11:50:58.645391   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:58.645871   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:58.645898   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:58.645841   60690 retry.go:31] will retry after 371.739884ms: waiting for machine to come up
	I0722 11:50:59.019622   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.020229   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.020258   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.020202   60690 retry.go:31] will retry after 459.770177ms: waiting for machine to come up
	I0722 11:50:59.482207   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.482871   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.482901   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.482830   60690 retry.go:31] will retry after 459.633846ms: waiting for machine to come up
	I0722 11:50:59.944748   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.945204   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.945234   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.945166   60690 retry.go:31] will retry after 661.206679ms: waiting for machine to come up
	I0722 11:51:00.149442   59477 crio.go:462] duration metric: took 1.519707341s to copy over tarball
	I0722 11:51:00.149516   59477 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:02.402666   59477 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.253119001s)
	I0722 11:51:02.402691   59477 crio.go:469] duration metric: took 2.253218813s to extract the tarball
	I0722 11:51:02.402699   59477 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:02.441191   59477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:02.487854   59477 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:51:02.487881   59477 cache_images.go:84] Images are preloaded, skipping loading
	I0722 11:51:02.487890   59477 kubeadm.go:934] updating node { 192.168.72.113 8443 v1.30.3 crio true true} ...
	I0722 11:51:02.488035   59477 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-802149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:02.488123   59477 ssh_runner.go:195] Run: crio config
	I0722 11:51:02.532769   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:51:02.532790   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:02.532801   59477 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:02.532833   59477 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.113 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-802149 NodeName:embed-certs-802149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:51:02.533018   59477 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.113
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-802149"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.113
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.113"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:02.533107   59477 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 11:51:02.543311   59477 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:02.543385   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:02.552865   59477 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0722 11:51:02.569231   59477 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:02.584952   59477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
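	At this point the kubelet drop-in, the kubelet unit, and the new kubeadm config have all been written to the paths shown in the three scp lines above. When a restart like this one goes wrong later, reading those files back is the quickest sanity check; this sketch uses only the paths from the log:
	
	systemctl cat kubelet
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	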
	I0722 11:51:02.601722   59477 ssh_runner.go:195] Run: grep 192.168.72.113	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:02.605830   59477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.113	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:02.617991   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:02.739082   59477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:02.756204   59477 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149 for IP: 192.168.72.113
	I0722 11:51:02.756226   59477 certs.go:194] generating shared ca certs ...
	I0722 11:51:02.756254   59477 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:02.756452   59477 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:02.756509   59477 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:02.756521   59477 certs.go:256] generating profile certs ...
	I0722 11:51:02.756641   59477 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/client.key
	I0722 11:51:02.756720   59477 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key.447fbea1
	I0722 11:51:02.756767   59477 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.key
	I0722 11:51:02.756907   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:02.756955   59477 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:02.756968   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:02.757004   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:02.757037   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:02.757073   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:02.757130   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:02.758009   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:02.791767   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:02.833143   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:02.859372   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:02.888441   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0722 11:51:02.926712   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 11:51:02.963931   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:02.986981   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 11:51:03.010885   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:03.033851   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:03.057467   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:03.080230   59477 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:03.096981   59477 ssh_runner.go:195] Run: openssl version
	I0722 11:51:03.103002   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:03.114012   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.118692   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.118743   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.124703   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:03.134986   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:03.145119   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.149396   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.149442   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.154767   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:03.165063   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:03.175292   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.179650   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.179691   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.184991   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
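	Each CA installed above is registered with the system trust store in the classic OpenSSL layout: the certificate is linked under /etc/ssl/certs, and a <subject-hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0 here) points at it so OpenSSL can look it up by hash. Reproducing one of those hash links by hand looks roughly like this (a sketch; minikubeCA.pem is the certificate the log just linked):
	
	h=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"
	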
	I0722 11:51:03.195065   59477 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:03.199423   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:03.205027   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:03.210699   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:03.216411   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:03.221888   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:03.227658   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
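	The six openssl runs above all use -checkend 86400, which exits non-zero if the certificate would expire within the next 24 hours (and 0 otherwise), which is presumably how the existing certs are judged reusable here. The same check can be repeated by hand against any of the listed files, for example:
	
	# exit status 0 means the cert will not expire within the next 86400 seconds (24h)
	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo 'valid for >24h'
	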
	I0722 11:51:03.233098   59477 kubeadm.go:392] StartCluster: {Name:embed-certs-802149 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:03.233171   59477 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:03.233221   59477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:03.269240   59477 cri.go:89] found id: ""
	I0722 11:51:03.269311   59477 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:03.279739   59477 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:03.279758   59477 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:03.279809   59477 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:03.289523   59477 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:03.290456   59477 kubeconfig.go:125] found "embed-certs-802149" server: "https://192.168.72.113:8443"
	I0722 11:51:03.292369   59477 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:03.301716   59477 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.113
	I0722 11:51:03.301749   59477 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:03.301758   59477 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:03.301794   59477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:03.337520   59477 cri.go:89] found id: ""
	I0722 11:51:03.337587   59477 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:03.352758   59477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:03.362272   59477 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:03.362305   59477 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:03.362350   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:51:03.370574   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:03.370621   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:03.379339   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:51:03.387427   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:03.387470   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:03.395970   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:51:03.404226   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:03.404280   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:03.412683   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:51:03.420838   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:03.420877   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:03.429146   59477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
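
The stale-config cleanup just above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that is missing or does not mention it, then copies the regenerated kubeadm.yaml into place. A minimal local sketch of that check-and-remove loop (the real flow runs these steps over the SSH runner; the endpoint is hard-coded here as an assumption taken from the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // endpoint is assumed from the log above; the real code derives it from the
    // cluster config rather than hard-coding it.
    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, path := range confs {
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: remove it so the kubeadm
                // kubeconfig phase regenerates it from scratch.
                fmt.Printf("%s stale or absent, removing\n", path)
                _ = os.Remove(path)
            }
        }
    }
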
	I0722 11:51:03.440442   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:03.565768   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:04.457748   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:00.608285   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:00.608737   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:00.608759   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:00.608685   60690 retry.go:31] will retry after 728.049334ms: waiting for machine to come up
	I0722 11:51:01.337864   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:01.338406   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:01.338437   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:01.338329   60690 retry.go:31] will retry after 1.060339766s: waiting for machine to come up
	I0722 11:51:02.400096   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:02.400633   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:02.400664   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:02.400580   60690 retry.go:31] will retry after 957.922107ms: waiting for machine to come up
	I0722 11:51:03.360231   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:03.360663   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:03.360692   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:03.360612   60690 retry.go:31] will retry after 1.717107267s: waiting for machine to come up
	I0722 11:51:05.080655   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:05.081172   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:05.081196   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:05.081111   60690 retry.go:31] will retry after 1.708281457s: waiting for machine to come up
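
The interleaved old-k8s-version-101261 lines show the kvm2 driver polling for the VM's DHCP lease and backing off a little longer on each failed lookup. A rough sketch of such a retry loop; lookupIP here is a placeholder for the libvirt lease query, not the actual driver API:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying the libvirt DHCP leases for the domain's
    // MAC address; it is not the real kvm2 driver call.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for attempt := 1; time.Now().Before(deadline); attempt++ {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // Back off a little longer each attempt, with jitter, roughly
            // matching the growing delays in the log.
            delay := time.Duration(500+rand.Intn(500)*attempt) * time.Millisecond
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        if ip, err := waitForIP(5 * time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }
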
	I0722 11:51:04.673803   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:04.746647   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
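
Rather than a full `kubeadm init`, the restart replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml, each with the versioned binaries directory prepended to PATH, exactly as the Run lines above show. A simplified sketch of driving those same phases locally with os/exec (the real code sends them through its SSH runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            cmd := fmt.Sprintf(
                `sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
                phase)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
                return
            }
            fmt.Printf("phase %q ok\n", phase)
        }
    }
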
	I0722 11:51:04.870194   59477 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:04.870304   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.370787   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.870977   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.971259   59477 api_server.go:72] duration metric: took 1.101066217s to wait for apiserver process to appear ...
	I0722 11:51:05.971291   59477 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:51:05.971313   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:05.971841   59477 api_server.go:269] stopped: https://192.168.72.113:8443/healthz: Get "https://192.168.72.113:8443/healthz": dial tcp 192.168.72.113:8443: connect: connection refused
	I0722 11:51:06.471490   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.174013   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:09.174041   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:09.174055   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.201462   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.201513   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:09.471884   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.477573   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.477592   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:06.790946   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:06.791370   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:06.791398   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:06.791331   60690 retry.go:31] will retry after 2.398904394s: waiting for machine to come up
	I0722 11:51:09.193385   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:09.193778   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:09.193806   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:09.193704   60690 retry.go:31] will retry after 2.18416034s: waiting for machine to come up
	I0722 11:51:09.972279   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.982112   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.982144   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:10.471495   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:10.478784   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I0722 11:51:10.487326   59477 api_server.go:141] control plane version: v1.30.3
	I0722 11:51:10.487355   59477 api_server.go:131] duration metric: took 4.516056164s to wait for apiserver health ...
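
The healthz wait above tolerates connection refusals, 403s (the anonymous probe before RBAC bootstrap finishes), and 500s (post-start hooks still failing), and only stops once /healthz returns 200. A compact sketch of that polling loop; skipping TLS verification is an assumption to keep the example self-contained, whereas the real client trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption for the sketch only: skip TLS verification instead of
            // loading the cluster CA bundle.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    return nil // apiserver reports healthy
                }
                // 403 and 500 are expected while bootstrap hooks finish; keep polling.
                fmt.Printf("healthz returned %d, retrying\n", code)
            } else {
                fmt.Printf("healthz unreachable (%v), retrying\n", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %v", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.113:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
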
	I0722 11:51:10.487365   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:51:10.487374   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:10.488949   59477 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:51:10.490288   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:51:10.507047   59477 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
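
With the kvm2 driver and the crio runtime, a plain bridge CNI is selected and a single conflist is written to /etc/cni/net.d. The 496-byte payload itself is not reproduced in the log, so the snippet below writes an illustrative bridge+portmap conflist of the same general shape; the exact fields minikube ships may differ:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // conflist is an illustrative bridge CNI config, not the exact bytes minikube writes.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    `

    func main() {
        dir := "/etc/cni/net.d"
        if err := os.MkdirAll(dir, 0o755); err != nil {
            panic(err)
        }
        path := filepath.Join(dir, "1-k8s.conflist")
        if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
            panic(err)
        }
        fmt.Printf("wrote %d bytes to %s\n", len(conflist), path)
    }
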
	I0722 11:51:10.526828   59477 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:51:10.541695   59477 system_pods.go:59] 8 kube-system pods found
	I0722 11:51:10.541731   59477 system_pods.go:61] "coredns-7db6d8ff4d-s2zgw" [13ffaca7-beca-4c43-b7a7-2167fe71295c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:51:10.541741   59477 system_pods.go:61] "etcd-embed-certs-802149" [f81bfdc3-cc8f-40d3-9f6c-6b84b6490c07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:51:10.541752   59477 system_pods.go:61] "kube-apiserver-embed-certs-802149" [325b1597-385e-44df-b65c-2de853d792eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:51:10.541760   59477 system_pods.go:61] "kube-controller-manager-embed-certs-802149" [25d3ae23-fe5d-46b7-8d93-917d7c83912b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:51:10.541772   59477 system_pods.go:61] "kube-proxy-t9lkm" [0712acb3-3926-4b78-9c64-a7e46b1a4b18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 11:51:10.541780   59477 system_pods.go:61] "kube-scheduler-embed-certs-802149" [b521ffd3-9422-4df4-9f25-5e81a2d0fa9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:51:10.541788   59477 system_pods.go:61] "metrics-server-569cc877fc-wm2w8" [db886758-d7bb-41b3-b127-6f9fef839af0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:51:10.541799   59477 system_pods.go:61] "storage-provisioner" [291229fb-8a57-4976-911c-070ccc93adcd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 11:51:10.541810   59477 system_pods.go:74] duration metric: took 14.964696ms to wait for pod list to return data ...
	I0722 11:51:10.541822   59477 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:51:10.545280   59477 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:51:10.545307   59477 node_conditions.go:123] node cpu capacity is 2
	I0722 11:51:10.545327   59477 node_conditions.go:105] duration metric: took 3.49089ms to run NodePressure ...
	I0722 11:51:10.545349   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:10.812864   59477 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:51:10.817360   59477 kubeadm.go:739] kubelet initialised
	I0722 11:51:10.817379   59477 kubeadm.go:740] duration metric: took 4.491449ms waiting for restarted kubelet to initialise ...
	I0722 11:51:10.817387   59477 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:51:10.823766   59477 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.829370   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.829399   59477 pod_ready.go:81] duration metric: took 5.605447ms for pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.829411   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.829420   59477 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.835224   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "etcd-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.835250   59477 pod_ready.go:81] duration metric: took 5.819727ms for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.835261   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "etcd-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.835270   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.840324   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.840355   59477 pod_ready.go:81] duration metric: took 5.074415ms for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.840369   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.840378   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.939805   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.939828   59477 pod_ready.go:81] duration metric: took 99.423274ms for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.939837   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.939843   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t9lkm" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:11.329932   59477 pod_ready.go:92] pod "kube-proxy-t9lkm" in "kube-system" namespace has status "Ready":"True"
	I0722 11:51:11.329954   59477 pod_ready.go:81] duration metric: took 390.103451ms for pod "kube-proxy-t9lkm" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:11.329964   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:13.336193   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
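
The pod_ready loop above skips control-plane pods while their node still reports Ready=False and keeps polling the remaining pods until they turn Ready or the 4m budget expires. A much-reduced client-go sketch of that condition check (kubeconfig loading simplified, one pod instead of the full label set):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func nodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()

        // Wait up to 4 minutes for one pod, noting when its node is still
        // NotReady, mirroring the "skipping!" messages in the log.
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-t9lkm", metav1.GetOptions{})
            if err == nil {
                node, nerr := client.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
                if nerr == nil && !nodeReady(node) {
                    fmt.Println("node not Ready yet, skipping pod readiness check")
                } else if podReady(pod) {
                    fmt.Println("pod is Ready")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }
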
	I0722 11:51:11.378924   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:11.379301   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:11.379324   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:11.379257   60690 retry.go:31] will retry after 3.119433482s: waiting for machine to come up
	I0722 11:51:14.501549   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.502004   59674 main.go:141] libmachine: (old-k8s-version-101261) Found IP for machine: 192.168.50.51
	I0722 11:51:14.502029   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has current primary IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.502040   59674 main.go:141] libmachine: (old-k8s-version-101261) Reserving static IP address...
	I0722 11:51:14.502410   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "old-k8s-version-101261", mac: "52:54:00:e5:34:9a", ip: "192.168.50.51"} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.502429   59674 main.go:141] libmachine: (old-k8s-version-101261) Reserved static IP address: 192.168.50.51
	I0722 11:51:14.502448   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | skip adding static IP to network mk-old-k8s-version-101261 - found existing host DHCP lease matching {name: "old-k8s-version-101261", mac: "52:54:00:e5:34:9a", ip: "192.168.50.51"}
	I0722 11:51:14.502464   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Getting to WaitForSSH function...
	I0722 11:51:14.502481   59674 main.go:141] libmachine: (old-k8s-version-101261) Waiting for SSH to be available...
	I0722 11:51:14.504709   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.504989   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.505018   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.505192   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using SSH client type: external
	I0722 11:51:14.505229   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa (-rw-------)
	I0722 11:51:14.505273   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:14.505287   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | About to run SSH command:
	I0722 11:51:14.505300   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | exit 0
	I0722 11:51:14.628343   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | SSH cmd err, output: <nil>: 
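
WaitForSSH shells out to the system ssh client with host-key checking disabled and simply runs `exit 0` until the command succeeds. A stripped-down sketch of that probe loop, with the address and key path copied from the log and the option list abbreviated:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func sshReady(addr, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "PasswordAuthentication=no",
            "-i", keyPath,
            "docker@"+addr, "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        addr := "192.168.50.51"
        key := "/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa"
        for i := 0; i < 30; i++ {
            if sshReady(addr, key) {
                fmt.Println("SSH is available")
                return
            }
            fmt.Println("SSH not ready, retrying")
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for SSH")
    }
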
	I0722 11:51:14.628747   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetConfigRaw
	I0722 11:51:14.629343   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:14.631934   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.632294   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.632323   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.632541   59674 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/config.json ...
	I0722 11:51:14.632730   59674 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:14.632747   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:14.632934   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.635214   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.635567   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.635594   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.635663   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.635887   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.636070   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.636212   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.636492   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.636656   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.636665   59674 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:14.745179   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:14.745210   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:14.745456   59674 buildroot.go:166] provisioning hostname "old-k8s-version-101261"
	I0722 11:51:14.745482   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:14.745664   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.748709   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.749155   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.749187   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.749356   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.749528   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.749708   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.749851   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.750115   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.750325   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.750339   59674 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-101261 && echo "old-k8s-version-101261" | sudo tee /etc/hostname
	I0722 11:51:14.878323   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-101261
	
	I0722 11:51:14.878374   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.881403   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.881776   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.881799   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.882004   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.882191   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.882368   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.882523   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.882714   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.882886   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.882914   59674 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-101261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-101261/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-101261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:15.005182   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:15.005211   59674 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:15.005232   59674 buildroot.go:174] setting up certificates
	I0722 11:51:15.005244   59674 provision.go:84] configureAuth start
	I0722 11:51:15.005257   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:15.005510   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:15.008414   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.008818   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.008842   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.009021   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.011255   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.011549   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.011571   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.011712   59674 provision.go:143] copyHostCerts
	I0722 11:51:15.011784   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:15.011798   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:15.011862   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:15.011991   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:15.012003   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:15.012033   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:15.012117   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:15.012126   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:15.012156   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:15.012235   59674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-101261 san=[127.0.0.1 192.168.50.51 localhost minikube old-k8s-version-101261]
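
provision.go signs a per-machine server certificate with the local CA, using the SANs listed above (loopback, the VM IP, localhost, minikube, and the machine name). A condensed crypto/x509 sketch of that signing step; the key size, validity period, file paths, and the assumption that the CA key is PKCS#1 PEM are illustrative, not minikube's exact choices:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func mustLoadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey) {
        certPEM, err := os.ReadFile(certPath)
        if err != nil {
            panic(err)
        }
        keyPEM, err := os.ReadFile(keyPath)
        if err != nil {
            panic(err)
        }
        cb, _ := pem.Decode(certPEM)
        kb, _ := pem.Decode(keyPEM)
        if cb == nil || kb == nil {
            panic("could not decode CA PEM")
        }
        caCert, err := x509.ParseCertificate(cb.Bytes)
        if err != nil {
            panic(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(kb.Bytes) // assumes a PKCS#1 CA key
        if err != nil {
            panic(err)
        }
        return caCert, caKey
    }

    func main() {
        caCert, caKey := mustLoadCA("ca.pem", "ca-key.pem") // placeholder paths

        key, err := rsa.GenerateKey(rand.Reader, 2048) // key size is an assumption
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-101261"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity is an assumption
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the san=[...] list in the log line above.
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-101261"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.51")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        out, _ := os.Create("server.pem")
        defer out.Close()
        pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})

        keyOut, _ := os.Create("server-key.pem")
        defer keyOut.Close()
        pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    }
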
	I0722 11:51:16.173298   60225 start.go:364] duration metric: took 2m0.300081245s to acquireMachinesLock for "default-k8s-diff-port-605740"
	I0722 11:51:16.173351   60225 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:51:16.173359   60225 fix.go:54] fixHost starting: 
	I0722 11:51:16.173747   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:51:16.173788   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:51:16.189994   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0722 11:51:16.190364   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:51:16.190849   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:51:16.190880   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:51:16.191295   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:51:16.191520   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:16.191701   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:51:16.193226   60225 fix.go:112] recreateIfNeeded on default-k8s-diff-port-605740: state=Stopped err=<nil>
	I0722 11:51:16.193246   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	W0722 11:51:16.193413   60225 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:51:16.195294   60225 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-605740" ...
	I0722 11:51:15.514379   59674 provision.go:177] copyRemoteCerts
	I0722 11:51:15.514438   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:15.514471   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.517061   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.517350   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.517375   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.517523   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.517692   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.517856   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.517976   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:15.598446   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:15.622512   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 11:51:15.645865   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 11:51:15.669136   59674 provision.go:87] duration metric: took 663.880253ms to configureAuth
	I0722 11:51:15.669166   59674 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:15.669360   59674 config.go:182] Loaded profile config "old-k8s-version-101261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 11:51:15.669441   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.672245   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.672720   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.672769   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.672859   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.673066   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.673228   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.673348   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.673589   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:15.673764   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:15.673784   59674 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:15.935046   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:15.935071   59674 machine.go:97] duration metric: took 1.302328915s to provisionDockerMachine
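
For crio, the provisioner writes the service CIDR into /etc/sysconfig/crio.minikube as an --insecure-registry option and restarts the service; the `%!s(MISSING)` in the logged command appears to be a log-formatting artifact, since the tee output above shows the options were written as intended. A small sketch of producing and installing that file locally:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Options taken from the tee output in the log above.
        content := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"

        if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
            panic(err)
        }
        // Restart crio so it picks up the new options, as the remote command does.
        out, err := exec.Command("sudo", "systemctl", "restart", "crio").CombinedOutput()
        if err != nil {
            fmt.Printf("restart failed: %v\n%s", err, out)
            return
        }
        fmt.Println("crio restarted with updated options")
    }
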
	I0722 11:51:15.935082   59674 start.go:293] postStartSetup for "old-k8s-version-101261" (driver="kvm2")
	I0722 11:51:15.935094   59674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:15.935114   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:15.935445   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:15.935485   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.938454   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.938802   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.938828   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.939013   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.939212   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.939341   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.939477   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.023536   59674 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:16.028446   59674 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:16.028474   59674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:16.028542   59674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:16.028639   59674 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:16.028746   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:16.038705   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:16.065421   59674 start.go:296] duration metric: took 130.328201ms for postStartSetup
	I0722 11:51:16.065455   59674 fix.go:56] duration metric: took 19.008317885s for fixHost
	I0722 11:51:16.065480   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.068098   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.068330   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.068354   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.068486   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.068697   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.068883   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.069035   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.069215   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:16.069371   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:16.069380   59674 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:51:16.173115   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649076.142588532
	
	I0722 11:51:16.173135   59674 fix.go:216] guest clock: 1721649076.142588532
	I0722 11:51:16.173149   59674 fix.go:229] Guest: 2024-07-22 11:51:16.142588532 +0000 UTC Remote: 2024-07-22 11:51:16.065460257 +0000 UTC m=+220.687192060 (delta=77.128275ms)
	I0722 11:51:16.173189   59674 fix.go:200] guest clock delta is within tolerance: 77.128275ms
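
After provisioning, the guest clock is read over SSH (the garbled `date +%!s(MISSING).%!N(MISSING)` command evidently renders as `date +%s.%N`, matching the seconds.nanoseconds output) and compared against the host's timestamp; a resync is only needed when the delta exceeds a tolerance. A tiny sketch of that comparison using the values from the log; the tolerance constant is an assumption:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest timestamp as reported in the log (seconds.nanoseconds from `date +%s.%N`).
        guest := time.Unix(1721649076, 142588532)
        // Host-side timestamp for the same moment, also from the log.
        host := time.Date(2024, 7, 22, 11, 51, 16, 65460257, time.UTC)

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        // Tolerance is an assumption for the sketch; the log only records that the
        // observed ~77ms delta was "within tolerance".
        const tolerance = 2 * time.Second
        if delta > tolerance {
            fmt.Printf("guest clock off by %v, would resync\n", delta)
        } else {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        }
    }
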
	I0722 11:51:16.173196   59674 start.go:83] releasing machines lock for "old-k8s-version-101261", held for 19.116093793s
	I0722 11:51:16.173224   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.173497   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:16.176102   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.176522   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.176564   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.176712   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177189   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177387   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177476   59674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:16.177519   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.177627   59674 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:16.177650   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.180365   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180402   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180751   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.180773   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180806   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.180819   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180908   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.181020   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.181091   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.181168   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.181254   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.181331   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.181346   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.181492   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.262013   59674 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:16.292921   59674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:16.437729   59674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:16.443840   59674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:16.443929   59674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:16.459686   59674 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:16.459703   59674 start.go:495] detecting cgroup driver to use...
	I0722 11:51:16.459761   59674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:16.474514   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:16.487808   59674 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:16.487862   59674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:16.500977   59674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:16.514210   59674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:16.629558   59674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:16.810274   59674 docker.go:233] disabling docker service ...
	I0722 11:51:16.810351   59674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:16.829708   59674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:16.848587   59674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:16.973745   59674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:17.114538   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:17.128727   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:17.147575   59674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 11:51:17.147628   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.157881   59674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:17.157939   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.168881   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.179407   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.189894   59674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
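	The three sed edits above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf so the runtime uses the pinned pause image and the cgroupfs cgroup manager, with conmon placed in the pod cgroup. A minimal spot check one could run by hand (hypothetical, not executed by this test run; it assumes the same drop-in path used above) and the fragment those edits should leave behind:
	# Hypothetical manual check, not part of the test output above.
	grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	# Expected, given the sed edits above:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"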
	I0722 11:51:17.201433   59674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:17.210901   59674 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:17.210954   59674 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:17.224683   59674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:51:17.235711   59674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:17.366833   59674 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:17.508852   59674 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:17.508932   59674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:17.514001   59674 start.go:563] Will wait 60s for crictl version
	I0722 11:51:17.514051   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:17.517678   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:17.555193   59674 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:17.555272   59674 ssh_runner.go:195] Run: crio --version
	I0722 11:51:17.583250   59674 ssh_runner.go:195] Run: crio --version
	I0722 11:51:17.615045   59674 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 11:51:15.837077   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:17.838129   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:17.616423   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:17.619616   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:17.620012   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:17.620043   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:17.620213   59674 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:17.624632   59674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:17.639759   59674 kubeadm.go:883] updating cluster {Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:17.639882   59674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 11:51:17.639923   59674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:17.688299   59674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:51:17.688370   59674 ssh_runner.go:195] Run: which lz4
	I0722 11:51:17.692462   59674 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 11:51:17.696723   59674 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:51:17.696761   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 11:51:19.364933   59674 crio.go:462] duration metric: took 1.672511697s to copy over tarball
	I0722 11:51:19.365010   59674 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:16.196500   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Start
	I0722 11:51:16.196676   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring networks are active...
	I0722 11:51:16.197307   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring network default is active
	I0722 11:51:16.197719   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring network mk-default-k8s-diff-port-605740 is active
	I0722 11:51:16.198143   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Getting domain xml...
	I0722 11:51:16.198839   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Creating domain...
	I0722 11:51:17.463368   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting to get IP...
	I0722 11:51:17.464268   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.464666   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.464716   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:17.464632   60829 retry.go:31] will retry after 215.824583ms: waiting for machine to come up
	I0722 11:51:17.682231   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.682588   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.682616   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:17.682546   60829 retry.go:31] will retry after 345.816562ms: waiting for machine to come up
	I0722 11:51:18.030040   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.030591   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.030625   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.030526   60829 retry.go:31] will retry after 332.854172ms: waiting for machine to come up
	I0722 11:51:18.365009   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.365493   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.365522   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.365455   60829 retry.go:31] will retry after 478.33893ms: waiting for machine to come up
	I0722 11:51:18.846014   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.846447   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.846475   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.846386   60829 retry.go:31] will retry after 484.269461ms: waiting for machine to come up
	I0722 11:51:19.332181   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:19.332572   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:19.332607   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:19.332523   60829 retry.go:31] will retry after 856.318702ms: waiting for machine to come up
	I0722 11:51:20.190301   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.190751   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.190775   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:20.190702   60829 retry.go:31] will retry after 747.6345ms: waiting for machine to come up
	I0722 11:51:19.838679   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:21.850685   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:24.338532   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:22.347245   59674 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.982204367s)
	I0722 11:51:22.347275   59674 crio.go:469] duration metric: took 2.982313685s to extract the tarball
	I0722 11:51:22.347283   59674 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:22.390059   59674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:22.429356   59674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:51:22.429383   59674 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 11:51:22.429499   59674 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.429520   59674 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.429524   59674 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.429545   59674 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 11:51:22.429498   59674 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.429497   59674 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.429529   59674 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.429498   59674 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.431549   59674 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.431556   59674 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 11:51:22.431570   59674 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.431588   59674 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.431611   59674 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.431555   59674 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.431666   59674 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.431675   59674 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.603462   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.604733   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.608788   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.611177   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.616981   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.634838   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.674004   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 11:51:22.706162   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.730052   59674 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 11:51:22.730112   59674 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 11:51:22.730129   59674 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.730142   59674 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.730183   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.730196   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.760229   59674 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 11:51:22.760271   59674 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.760322   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.787207   59674 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 11:51:22.787244   59674 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 11:51:22.787254   59674 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.787273   59674 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.787303   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.787311   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.828611   59674 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 11:51:22.828656   59674 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.828703   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.841609   59674 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 11:51:22.841648   59674 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 11:51:22.841692   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.913517   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.913549   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.913557   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.913605   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.913519   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.913605   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.913625   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 11:51:23.063640   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 11:51:23.063652   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 11:51:23.063742   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 11:51:23.063766   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 11:51:23.070202   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0722 11:51:23.073265   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 11:51:23.073310   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 11:51:23.073358   59674 cache_images.go:92] duration metric: took 643.962788ms to LoadCachedImages
	W0722 11:51:23.073425   59674 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0722 11:51:23.073438   59674 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.20.0 crio true true} ...
	I0722 11:51:23.073584   59674 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-101261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:23.073666   59674 ssh_runner.go:195] Run: crio config
	I0722 11:51:23.125532   59674 cni.go:84] Creating CNI manager for ""
	I0722 11:51:23.125554   59674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:23.125566   59674 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:23.125590   59674 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-101261 NodeName:old-k8s-version-101261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 11:51:23.125753   59674 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-101261"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:23.125818   59674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 11:51:23.136207   59674 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:23.136277   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:23.146103   59674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0722 11:51:23.163756   59674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:23.183108   59674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0722 11:51:23.201223   59674 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:23.205369   59674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:23.218711   59674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:23.339415   59674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:23.358601   59674 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261 for IP: 192.168.50.51
	I0722 11:51:23.358622   59674 certs.go:194] generating shared ca certs ...
	I0722 11:51:23.358654   59674 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:23.358813   59674 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:23.358865   59674 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:23.358877   59674 certs.go:256] generating profile certs ...
	I0722 11:51:23.358990   59674 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/client.key
	I0722 11:51:23.359058   59674 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key.455618c3
	I0722 11:51:23.359110   59674 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key
	I0722 11:51:23.359248   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:23.359286   59674 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:23.359300   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:23.359332   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:23.359363   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:23.359393   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:23.359445   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:23.360290   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:23.407113   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:23.439799   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:23.484136   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:23.513902   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 11:51:23.551266   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:51:23.581930   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:23.612470   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:51:23.644003   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:23.671068   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:23.695514   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:23.722711   59674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:23.742312   59674 ssh_runner.go:195] Run: openssl version
	I0722 11:51:23.749680   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:23.763975   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.769799   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.769848   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.777286   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:51:23.788007   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:23.799005   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.803367   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.803405   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.809239   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:23.820095   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:23.832492   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.837230   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.837268   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.842861   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
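	The test/ln/openssl sequence above installs each CA certificate under the OpenSSL hashed-name convention: the PEM is linked into /usr/share/ca-certificates and /etc/ssl/certs, its subject hash is computed with openssl x509 -hash -noout, and a <hash>.0 symlink is added so TLS clients can look the certificate up by hash (hence names like 3ec20f2e.0, b5213941.0 and 51391683.0). A minimal sketch of the same pattern for a single certificate, using a hypothetical path that is not taken from this run:
	# Install one CA cert the way the steps above do: link it, then add the OpenSSL hash symlink.
	pem=/usr/share/ca-certificates/example.pem   # hypothetical certificate path
	sudo ln -fs "$pem" /etc/ssl/certs/example.pem
	h=$(openssl x509 -hash -noout -in "$pem")    # subject-name hash, e.g. 3ec20f2e
	sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${h}.0"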
	I0722 11:51:23.853772   59674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:23.858178   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:23.864134   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:23.870035   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:23.875939   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:23.881552   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:23.887286   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 11:51:23.893029   59674 kubeadm.go:392] StartCluster: {Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:23.893133   59674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:23.893184   59674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:23.939121   59674 cri.go:89] found id: ""
	I0722 11:51:23.939187   59674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:23.951089   59674 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:23.951108   59674 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:23.951154   59674 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:23.962212   59674 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:23.963627   59674 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-101261" does not appear in /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:51:23.964627   59674 kubeconfig.go:62] /home/jenkins/minikube-integration/19313-5960/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-101261" cluster setting kubeconfig missing "old-k8s-version-101261" context setting]
	I0722 11:51:23.966075   59674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:24.070513   59674 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:24.081628   59674 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0722 11:51:24.081662   59674 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:24.081674   59674 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:24.081728   59674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:24.117673   59674 cri.go:89] found id: ""
	I0722 11:51:24.117750   59674 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:24.134081   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:24.144294   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:24.144315   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:24.144366   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:51:24.153640   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:24.153685   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:24.163252   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:51:24.173762   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:24.173815   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:24.183272   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:51:24.194090   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:24.194148   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:24.205213   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:51:24.215709   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:24.215787   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:24.226876   59674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:24.237966   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:24.378277   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:20.939620   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.940073   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.940106   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:20.940007   60829 retry.go:31] will retry after 1.295925992s: waiting for machine to come up
	I0722 11:51:22.237614   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:22.238096   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:22.238128   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:22.238045   60829 retry.go:31] will retry after 1.652562745s: waiting for machine to come up
	I0722 11:51:23.891976   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:23.892496   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:23.892519   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:23.892468   60829 retry.go:31] will retry after 2.313623774s: waiting for machine to come up
	I0722 11:51:24.839903   59477 pod_ready.go:92] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:51:24.839939   59477 pod_ready.go:81] duration metric: took 13.509966584s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:24.839957   59477 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:26.847104   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:29.345675   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:25.787025   59674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.408710522s)
	I0722 11:51:25.787059   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.031231   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.120122   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.216108   59674 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:26.216204   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:26.717257   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:27.216782   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:27.716476   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:28.216529   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:28.716302   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:29.216249   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:29.717071   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:30.216364   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:26.207294   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:26.207841   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:26.207867   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:26.207805   60829 retry.go:31] will retry after 2.606127418s: waiting for machine to come up
	I0722 11:51:28.817432   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:28.817795   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:28.817851   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:28.817748   60829 retry.go:31] will retry after 2.617524673s: waiting for machine to come up
	I0722 11:51:31.346476   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:33.847820   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:30.716961   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.216474   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.716685   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:32.216748   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:32.716886   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:33.216333   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:33.717052   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:34.217128   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:34.716466   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:35.216975   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.436413   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:31.436710   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:31.436745   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:31.436665   60829 retry.go:31] will retry after 3.455203757s: waiting for machine to come up
	I0722 11:51:34.896151   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.896595   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Found IP for machine: 192.168.39.87
	I0722 11:51:34.896619   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Reserving static IP address...
	I0722 11:51:34.896637   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has current primary IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.897007   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Reserved static IP address: 192.168.39.87
	I0722 11:51:34.897037   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-605740", mac: "52:54:00:23:45:e9", ip: "192.168.39.87"} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:34.897074   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for SSH to be available...
	I0722 11:51:34.897094   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | skip adding static IP to network mk-default-k8s-diff-port-605740 - found existing host DHCP lease matching {name: "default-k8s-diff-port-605740", mac: "52:54:00:23:45:e9", ip: "192.168.39.87"}
	I0722 11:51:34.897107   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Getting to WaitForSSH function...
	I0722 11:51:34.899104   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.899428   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:34.899450   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.899570   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Using SSH client type: external
	I0722 11:51:34.899594   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa (-rw-------)
	I0722 11:51:34.899619   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:34.899636   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | About to run SSH command:
	I0722 11:51:34.899651   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | exit 0
	I0722 11:51:35.028440   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | SSH cmd err, output: <nil>: 
	I0722 11:51:35.028814   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetConfigRaw
	I0722 11:51:35.029407   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:35.031646   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.031967   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.031998   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.032179   60225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/config.json ...
	I0722 11:51:35.032355   60225 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:35.032372   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:35.032587   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.034608   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.034924   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.034944   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.035089   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.035242   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.035368   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.035497   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.035637   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.035812   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.035823   60225 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:35.148621   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:35.148655   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.148914   60225 buildroot.go:166] provisioning hostname "default-k8s-diff-port-605740"
	I0722 11:51:35.148945   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.149128   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.151753   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.152146   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.152170   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.152294   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.152461   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.152591   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.152706   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.152847   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.153057   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.153079   60225 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-605740 && echo "default-k8s-diff-port-605740" | sudo tee /etc/hostname
	I0722 11:51:35.278248   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-605740
	
	I0722 11:51:35.278277   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.281778   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.282158   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.282189   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.282361   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.282539   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.282712   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.282826   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.283014   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.283239   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.283266   60225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-605740' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-605740/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-605740' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:35.405142   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:35.405176   60225 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:35.405215   60225 buildroot.go:174] setting up certificates
	I0722 11:51:35.405228   60225 provision.go:84] configureAuth start
	I0722 11:51:35.405240   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.405502   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:35.407912   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.408262   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.408284   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.408435   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.410456   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.410794   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.410821   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.410959   60225 provision.go:143] copyHostCerts
	I0722 11:51:35.411021   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:35.411034   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:35.411613   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:35.411720   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:35.411729   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:35.411749   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:35.411803   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:35.411811   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:35.411827   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:35.411881   60225 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-605740 san=[127.0.0.1 192.168.39.87 default-k8s-diff-port-605740 localhost minikube]
	I0722 11:51:36.476985   58921 start.go:364] duration metric: took 53.473936955s to acquireMachinesLock for "no-preload-339929"
	I0722 11:51:36.477060   58921 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:51:36.477071   58921 fix.go:54] fixHost starting: 
	I0722 11:51:36.477497   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:51:36.477538   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:51:36.494783   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33955
	I0722 11:51:36.495220   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:51:36.495728   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:51:36.495749   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:51:36.496045   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:51:36.496241   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:36.496399   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:51:36.497658   58921 fix.go:112] recreateIfNeeded on no-preload-339929: state=Stopped err=<nil>
	I0722 11:51:36.497681   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	W0722 11:51:36.497840   58921 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:51:36.499655   58921 out.go:177] * Restarting existing kvm2 VM for "no-preload-339929" ...
	I0722 11:51:35.787061   60225 provision.go:177] copyRemoteCerts
	I0722 11:51:35.787119   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:35.787143   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.789647   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.790048   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.790081   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.790289   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.790502   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.790665   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.790815   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:35.878791   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0722 11:51:35.902034   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:51:35.925234   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:35.948008   60225 provision.go:87] duration metric: took 542.764534ms to configureAuth
	I0722 11:51:35.948038   60225 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:35.948231   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:51:35.948315   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.951029   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.951381   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.951413   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.951561   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.951777   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.951927   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.952064   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.952196   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.952447   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.952465   60225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:36.234284   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:36.234329   60225 machine.go:97] duration metric: took 1.201960693s to provisionDockerMachine
	I0722 11:51:36.234342   60225 start.go:293] postStartSetup for "default-k8s-diff-port-605740" (driver="kvm2")
	I0722 11:51:36.234355   60225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:36.234375   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.234712   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:36.234742   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.237536   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.237897   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.237928   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.238045   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.238253   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.238435   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.238580   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.322600   60225 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:36.326734   60225 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:36.326753   60225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:36.326809   60225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:36.326893   60225 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:36.326981   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:36.335877   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:36.359701   60225 start.go:296] duration metric: took 125.346106ms for postStartSetup
	I0722 11:51:36.359734   60225 fix.go:56] duration metric: took 20.186375753s for fixHost
	I0722 11:51:36.359751   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.362282   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.362581   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.362603   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.362782   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.362976   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.363121   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.363218   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.363355   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:36.363506   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:36.363515   60225 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:51:36.476833   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649096.450051771
	
	I0722 11:51:36.476869   60225 fix.go:216] guest clock: 1721649096.450051771
	I0722 11:51:36.476877   60225 fix.go:229] Guest: 2024-07-22 11:51:36.450051771 +0000 UTC Remote: 2024-07-22 11:51:36.359737602 +0000 UTC m=+140.620851572 (delta=90.314169ms)
	I0722 11:51:36.476895   60225 fix.go:200] guest clock delta is within tolerance: 90.314169ms
	I0722 11:51:36.476900   60225 start.go:83] releasing machines lock for "default-k8s-diff-port-605740", held for 20.303575504s
	I0722 11:51:36.476926   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.477201   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:36.480567   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.480990   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.481020   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.481182   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481657   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481827   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481906   60225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:36.481947   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.482026   60225 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:36.482044   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.484577   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.484762   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485029   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.485054   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485199   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.485224   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485246   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.485393   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.485406   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.485524   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.485537   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.485652   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.485729   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.485788   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.565892   60225 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:36.592221   60225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:36.739153   60225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:36.746870   60225 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:36.746933   60225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:36.766745   60225 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:36.766769   60225 start.go:495] detecting cgroup driver to use...
	I0722 11:51:36.766837   60225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:36.782140   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:36.797037   60225 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:36.797118   60225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:36.810796   60225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:36.823955   60225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:36.943613   60225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:37.123238   60225 docker.go:233] disabling docker service ...
	I0722 11:51:37.123318   60225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:37.138682   60225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:37.153426   60225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:37.279469   60225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:37.404250   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:37.428047   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:37.446939   60225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 11:51:37.446994   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.457326   60225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:37.457400   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.468141   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.479246   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.489857   60225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:37.502713   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.517197   60225 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.537115   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.548917   60225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:37.559530   60225 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:37.559590   60225 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:37.574785   60225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:51:37.585589   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:37.730483   60225 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:37.888282   60225 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:37.888373   60225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:37.893498   60225 start.go:563] Will wait 60s for crictl version
	I0722 11:51:37.893555   60225 ssh_runner.go:195] Run: which crictl
	I0722 11:51:37.897212   60225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:37.940959   60225 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:37.941054   60225 ssh_runner.go:195] Run: crio --version
	I0722 11:51:37.969273   60225 ssh_runner.go:195] Run: crio --version
	I0722 11:51:38.001475   60225 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 11:51:36.345564   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:38.349105   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:35.716593   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.216517   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.716294   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:37.217023   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:37.716511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:38.216231   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:38.716522   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:39.216492   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:39.716478   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:40.216337   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.500994   58921 main.go:141] libmachine: (no-preload-339929) Calling .Start
	I0722 11:51:36.501149   58921 main.go:141] libmachine: (no-preload-339929) Ensuring networks are active...
	I0722 11:51:36.501737   58921 main.go:141] libmachine: (no-preload-339929) Ensuring network default is active
	I0722 11:51:36.502002   58921 main.go:141] libmachine: (no-preload-339929) Ensuring network mk-no-preload-339929 is active
	I0722 11:51:36.502421   58921 main.go:141] libmachine: (no-preload-339929) Getting domain xml...
	I0722 11:51:36.503225   58921 main.go:141] libmachine: (no-preload-339929) Creating domain...
	I0722 11:51:37.794982   58921 main.go:141] libmachine: (no-preload-339929) Waiting to get IP...
	I0722 11:51:37.795825   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:37.796235   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:37.796291   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:37.796218   61023 retry.go:31] will retry after 217.454766ms: waiting for machine to come up
	I0722 11:51:38.015757   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.016236   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.016258   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.016185   61023 retry.go:31] will retry after 374.564997ms: waiting for machine to come up
	I0722 11:51:38.392755   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.393280   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.393310   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.393238   61023 retry.go:31] will retry after 462.45005ms: waiting for machine to come up
	I0722 11:51:38.856969   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.857508   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.857539   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.857455   61023 retry.go:31] will retry after 440.89249ms: waiting for machine to come up
	I0722 11:51:39.300253   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:39.300834   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:39.300860   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:39.300774   61023 retry.go:31] will retry after 746.547558ms: waiting for machine to come up
	I0722 11:51:40.048708   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:40.049175   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:40.049211   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:40.049133   61023 retry.go:31] will retry after 608.540931ms: waiting for machine to come up
	I0722 11:51:38.002695   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:38.005678   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:38.006057   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:38.006085   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:38.006276   60225 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:38.010327   60225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:38.023216   60225 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:38.023326   60225 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:51:38.023375   60225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:38.059519   60225 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 11:51:38.059603   60225 ssh_runner.go:195] Run: which lz4
	I0722 11:51:38.063709   60225 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 11:51:38.068879   60225 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:51:38.068903   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 11:51:39.570299   60225 crio.go:462] duration metric: took 1.50662056s to copy over tarball
	I0722 11:51:39.570380   60225 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:40.846268   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:42.848761   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:40.716395   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:41.216516   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:41.716363   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:42.217236   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:42.716938   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:43.216950   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:43.717242   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.216318   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.716925   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.216991   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:40.658992   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:40.659502   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:40.659542   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:40.659447   61023 retry.go:31] will retry after 974.447874ms: waiting for machine to come up
	I0722 11:51:41.636057   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:41.636596   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:41.636620   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:41.636538   61023 retry.go:31] will retry after 1.040271869s: waiting for machine to come up
	I0722 11:51:42.678559   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:42.678995   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:42.679018   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:42.678938   61023 retry.go:31] will retry after 1.797018808s: waiting for machine to come up
	I0722 11:51:44.477360   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:44.477729   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:44.477764   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:44.477687   61023 retry.go:31] will retry after 2.040933698s: waiting for machine to come up
	I0722 11:51:41.921416   60225 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.35100934s)
	I0722 11:51:41.921453   60225 crio.go:469] duration metric: took 2.351127326s to extract the tarball
	I0722 11:51:41.921460   60225 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:41.959856   60225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:42.011834   60225 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:51:42.011864   60225 cache_images.go:84] Images are preloaded, skipping loading
	I0722 11:51:42.011874   60225 kubeadm.go:934] updating node { 192.168.39.87 8444 v1.30.3 crio true true} ...
	I0722 11:51:42.012016   60225 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-605740 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:42.012101   60225 ssh_runner.go:195] Run: crio config
	I0722 11:51:42.067629   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:51:42.067650   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:42.067661   60225 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:42.067681   60225 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.87 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-605740 NodeName:default-k8s-diff-port-605740 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.87"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:51:42.067849   60225 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.87
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-605740"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.87"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:42.067926   60225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 11:51:42.079267   60225 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:42.079331   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:42.089696   60225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0722 11:51:42.109204   60225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:42.125186   60225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
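
The log above shows the full kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) being rendered in memory and copied to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of that pattern, here is a minimal Go sketch that renders a trimmed-down kubeadm config from a struct with text/template; the struct, template, and output path are illustrative assumptions, not minikube's actual types or template.

	package main

	import (
		"os"
		"text/template"
	)

	// clusterParams is a hypothetical, trimmed-down stand-in for the values
	// minikube feeds into its kubeadm template (node IP, port, version, name).
	type clusterParams struct {
		AdvertiseAddress  string
		BindPort          int
		KubernetesVersion string
		ClusterName       string
		PodSubnet         string
		ServiceSubnet     string
	}

	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := clusterParams{
			AdvertiseAddress:  "192.168.39.87",
			BindPort:          8444,
			KubernetesVersion: "v1.30.3",
			ClusterName:       "mk",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
		}
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		// Write the rendered config where a bootstrapper could later pick it up.
		f, err := os.Create("/tmp/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		if err := t.Execute(f, p); err != nil {
			panic(err)
		}
	}

In the real run the rendered bytes are sent over SSH ("scp memory") rather than written locally, but the generate-then-copy flow is the same.
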
	I0722 11:51:42.143217   60225 ssh_runner.go:195] Run: grep 192.168.39.87	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:42.147117   60225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.87	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:42.159283   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:42.297313   60225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:42.315795   60225 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740 for IP: 192.168.39.87
	I0722 11:51:42.315819   60225 certs.go:194] generating shared ca certs ...
	I0722 11:51:42.315838   60225 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:42.316036   60225 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:42.316104   60225 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:42.316121   60225 certs.go:256] generating profile certs ...
	I0722 11:51:42.316211   60225 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/client.key
	I0722 11:51:42.316281   60225 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.key.82803a6c
	I0722 11:51:42.316344   60225 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.key
	I0722 11:51:42.316515   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:42.316562   60225 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:42.316575   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:42.316606   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:42.316642   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:42.316673   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:42.316729   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:42.317611   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:42.368371   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:42.396161   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:42.423661   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:42.461478   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 11:51:42.492145   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:51:42.523047   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:42.551774   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 11:51:42.576922   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:42.600869   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:42.624223   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:42.647454   60225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:42.664055   60225 ssh_runner.go:195] Run: openssl version
	I0722 11:51:42.670102   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:42.681220   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.685927   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.685979   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.691823   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:42.702680   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:42.713592   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.719980   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.720042   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.727573   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:42.741805   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:42.756511   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.761951   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.762007   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.767540   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
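
The sequence above copies each CA bundle into /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0), which is how OpenSSL-based clients locate trusted CAs. A small Go sketch of that pattern, shelling out to the openssl binary for the hash; the helper and paths are illustrative, taken from the log rather than from minikube's code.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCert computes the OpenSSL subject hash of a PEM certificate and
	// symlinks it into /etc/ssl/certs/<hash>.0, mirroring the commands in the log.
	func linkCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// Replace any stale link so repeated runs stay idempotent.
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
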
	I0722 11:51:42.777758   60225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:42.782242   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:42.787989   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:42.793552   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:42.799083   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:42.804666   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:42.810222   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
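
The openssl -checkend 86400 calls above confirm that each control-plane certificate remains valid for at least 24 hours before the cluster restart proceeds. The same check can be done natively with crypto/x509; a minimal sketch assuming one PEM-encoded certificate per file.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the first certificate in the PEM file is still
	// valid for at least the given duration (the openssl -checkend equivalent).
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("valid for 24h:", ok)
	}
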
	I0722 11:51:42.818545   60225 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:42.818639   60225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:42.818689   60225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:42.869630   60225 cri.go:89] found id: ""
	I0722 11:51:42.869706   60225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:42.881642   60225 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:42.881666   60225 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:42.881716   60225 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:42.891566   60225 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:42.892605   60225 kubeconfig.go:125] found "default-k8s-diff-port-605740" server: "https://192.168.39.87:8444"
	I0722 11:51:42.894819   60225 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:42.906152   60225 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.87
	I0722 11:51:42.906184   60225 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:42.906197   60225 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:42.906244   60225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:42.943687   60225 cri.go:89] found id: ""
	I0722 11:51:42.943765   60225 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:42.962989   60225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:42.974334   60225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:42.974351   60225 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:42.974398   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 11:51:42.985009   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:42.985069   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:42.996084   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 11:51:43.006592   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:43.006643   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:43.017500   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 11:51:43.026779   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:43.026853   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:43.037913   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 11:51:43.048504   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:43.048548   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:43.058045   60225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:43.067626   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:43.195638   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.027881   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.237863   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.306672   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.409525   60225 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:44.409655   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.909710   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.409772   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.465579   60225 api_server.go:72] duration metric: took 1.056052731s to wait for apiserver process to appear ...
	I0722 11:51:45.465613   60225 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:51:45.465634   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:45.466164   60225 api_server.go:269] stopped: https://192.168.39.87:8444/healthz: Get "https://192.168.39.87:8444/healthz": dial tcp 192.168.39.87:8444: connect: connection refused
	I0722 11:51:45.349550   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:47.847373   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:45.717299   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.216545   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.717273   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:47.217030   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:47.716837   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:48.216368   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:48.716993   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:49.216273   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:49.717087   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:50.216313   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.520086   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:46.520553   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:46.520583   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:46.520514   61023 retry.go:31] will retry after 2.21537525s: waiting for machine to come up
	I0722 11:51:48.737964   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:48.738435   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:48.738478   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:48.738387   61023 retry.go:31] will retry after 3.351574636s: waiting for machine to come up
	I0722 11:51:45.966026   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:48.955885   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:48.955919   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:48.955938   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.001144   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:49.001176   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:49.001190   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.011522   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:49.011567   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:49.466002   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.470318   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:49.470339   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:49.965932   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.974634   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:49.974659   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:50.466354   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:50.471348   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:50.471375   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:50.966014   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:50.970321   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:50.970344   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:51.466452   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:51.470676   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:51.470703   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:51.966303   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:51.970628   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:51.970654   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:52.466173   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:52.473153   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 200:
	ok
	I0722 11:51:52.479257   60225 api_server.go:141] control plane version: v1.30.3
	I0722 11:51:52.479280   60225 api_server.go:131] duration metric: took 7.013661456s to wait for apiserver health ...
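
The healthz exchanges above trace a normal apiserver restart: connection refused while the process starts, 403 for the anonymous probe until the RBAC bootstrap roles that permit unauthenticated access to /healthz exist, 500 while remaining post-start hooks finish, and finally 200. A minimal Go polling loop in the same spirit, skipping TLS verification the way an unauthenticated probe against a self-signed apiserver certificate would need to; the URL and overall timeout mirror the log, the retry interval is an assumption.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver serves a self-signed cert here, so the probe skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.39.87:8444/healthz")
			if err == nil {
				status := resp.StatusCode
				resp.Body.Close()
				if status == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Println("healthz returned", status, "- retrying")
			} else {
				fmt.Println("healthz not reachable yet:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver health")
	}
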
	I0722 11:51:52.479289   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:51:52.479295   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:52.480886   60225 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:51:50.346624   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:52.847483   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:50.716844   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:51.216793   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:51.716262   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.216710   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.716538   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:53.216424   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:53.716256   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:54.216266   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:54.716357   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:55.217214   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.091480   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:52.091931   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:52.091958   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:52.091893   61023 retry.go:31] will retry after 3.862235046s: waiting for machine to come up
	I0722 11:51:52.481952   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:51:52.493302   60225 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:51:52.517874   60225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:51:52.525926   60225 system_pods.go:59] 8 kube-system pods found
	I0722 11:51:52.525951   60225 system_pods.go:61] "coredns-7db6d8ff4d-dp56v" [5027da7d-5dc8-4ac5-ae15-ec99dffdce28] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:51:52.525960   60225 system_pods.go:61] "etcd-default-k8s-diff-port-605740" [648d4b21-2c2a-4ac7-a114-660379463d7d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:51:52.525967   60225 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-605740" [89ae1525-c944-4645-8951-e8834c9347b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:51:52.525978   60225 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-605740" [ff83ae5c-1dea-4633-afb8-c6487d1463b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:51:52.525983   60225 system_pods.go:61] "kube-proxy-ssttk" [6967a89c-ac7d-413f-bd0e-504367edca66] Running
	I0722 11:51:52.525991   60225 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-605740" [f930864f-4486-4c95-96f2-3004f58e80b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:51:52.526001   60225 system_pods.go:61] "metrics-server-569cc877fc-mzcvn" [9913463e-4ff9-4baa-a26e-76694605652e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:51:52.526009   60225 system_pods.go:61] "storage-provisioner" [08880428-a182-4540-a6f7-afffa3fc82a6] Running
	I0722 11:51:52.526020   60225 system_pods.go:74] duration metric: took 8.125407ms to wait for pod list to return data ...
	I0722 11:51:52.526030   60225 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:51:52.528765   60225 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:51:52.528788   60225 node_conditions.go:123] node cpu capacity is 2
	I0722 11:51:52.528801   60225 node_conditions.go:105] duration metric: took 2.765554ms to run NodePressure ...
	I0722 11:51:52.528822   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:52.797071   60225 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:51:52.802281   60225 kubeadm.go:739] kubelet initialised
	I0722 11:51:52.802311   60225 kubeadm.go:740] duration metric: took 5.210344ms waiting for restarted kubelet to initialise ...
	I0722 11:51:52.802322   60225 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:51:52.808512   60225 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.819816   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.819849   60225 pod_ready.go:81] duration metric: took 11.258701ms for pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.819861   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.819870   60225 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.825916   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.825958   60225 pod_ready.go:81] duration metric: took 6.076418ms for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.825977   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.825990   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.832243   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.832272   60225 pod_ready.go:81] duration metric: took 6.26533ms for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.832286   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.832295   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:54.841497   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
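
After the control-plane restart, the log waits for each system-critical pod to report Ready and skips pods whose node is itself not Ready yet. A compact client-go sketch of that kind of wait; the kubeconfig path and pod name are placeholders, and this is an illustration rather than minikube's own pod_ready implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's PodReady condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-default-k8s-diff-port-605740", metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod to be Ready")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}
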
	I0722 11:51:55.958678   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.959165   58921 main.go:141] libmachine: (no-preload-339929) Found IP for machine: 192.168.61.112
	I0722 11:51:55.959188   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has current primary IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.959195   58921 main.go:141] libmachine: (no-preload-339929) Reserving static IP address...
	I0722 11:51:55.959744   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "no-preload-339929", mac: "52:54:00:8d:72:69", ip: "192.168.61.112"} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:55.959774   58921 main.go:141] libmachine: (no-preload-339929) DBG | skip adding static IP to network mk-no-preload-339929 - found existing host DHCP lease matching {name: "no-preload-339929", mac: "52:54:00:8d:72:69", ip: "192.168.61.112"}
	I0722 11:51:55.959790   58921 main.go:141] libmachine: (no-preload-339929) Reserved static IP address: 192.168.61.112
	I0722 11:51:55.959806   58921 main.go:141] libmachine: (no-preload-339929) Waiting for SSH to be available...
	I0722 11:51:55.959817   58921 main.go:141] libmachine: (no-preload-339929) DBG | Getting to WaitForSSH function...
	I0722 11:51:55.962308   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.962703   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:55.962724   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.962853   58921 main.go:141] libmachine: (no-preload-339929) DBG | Using SSH client type: external
	I0722 11:51:55.962876   58921 main.go:141] libmachine: (no-preload-339929) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa (-rw-------)
	I0722 11:51:55.962924   58921 main.go:141] libmachine: (no-preload-339929) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:55.962946   58921 main.go:141] libmachine: (no-preload-339929) DBG | About to run SSH command:
	I0722 11:51:55.962963   58921 main.go:141] libmachine: (no-preload-339929) DBG | exit 0
	I0722 11:51:56.084629   58921 main.go:141] libmachine: (no-preload-339929) DBG | SSH cmd err, output: <nil>: 
	I0722 11:51:56.085007   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetConfigRaw
	I0722 11:51:56.085616   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:56.088120   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.088546   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.088576   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.088842   58921 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/config.json ...
	I0722 11:51:56.089066   58921 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:56.089088   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:56.089276   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.091216   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.091486   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.091508   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.091653   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.091823   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.091982   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.092132   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.092262   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.092434   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.092444   58921 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:56.192862   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:56.192891   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.193179   58921 buildroot.go:166] provisioning hostname "no-preload-339929"
	I0722 11:51:56.193207   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.193465   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.196195   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.196607   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.196637   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.196843   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.197048   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.197213   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.197358   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.197509   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.197707   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.197722   58921 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-339929 && echo "no-preload-339929" | sudo tee /etc/hostname
	I0722 11:51:56.309997   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-339929
	
	I0722 11:51:56.310019   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.312923   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.313263   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.313290   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.313481   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.313682   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.313882   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.314043   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.314223   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.314413   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.314435   58921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-339929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-339929/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-339929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:56.430088   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
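
The two SSH commands above first set the guest hostname, then patch /etc/hosts so the 127.0.1.1 entry matches it. Below is a rough Go sketch of building and running that same remote script over a plain ssh invocation; the function name and the reduced flag set are assumptions for illustration, not the ssh_runner/libmachine code the log records:

    package provision

    import (
    	"fmt"
    	"os/exec"
    )

    // setGuestHostname runs the same hostname + /etc/hosts fixup shown in the log
    // over ssh. keyPath and addr mirror the values printed earlier in the log.
    func setGuestHostname(keyPath, addr, hostname string) error {
    	script := fmt.Sprintf(
    		`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname && `+
    			`if ! grep -xq '.*\s%[1]s' /etc/hosts; then `+
    			`if grep -xq '127.0.1.1\s.*' /etc/hosts; then `+
    			`sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts; `+
    			`else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi`, hostname)
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-i", keyPath, "docker@"+addr, script)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("remote hostname setup failed: %v: %s", err, out)
    	}
    	return nil
    }
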
	I0722 11:51:56.430113   58921 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:56.430136   58921 buildroot.go:174] setting up certificates
	I0722 11:51:56.430147   58921 provision.go:84] configureAuth start
	I0722 11:51:56.430158   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.430428   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:56.433041   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.433421   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.433449   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.433619   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.436002   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.436300   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.436333   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.436508   58921 provision.go:143] copyHostCerts
	I0722 11:51:56.436579   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:56.436595   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:56.436665   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:56.436828   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:56.436843   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:56.436876   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:56.436950   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:56.436961   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:56.436987   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:56.437053   58921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.no-preload-339929 san=[127.0.0.1 192.168.61.112 localhost minikube no-preload-339929]
	I0722 11:51:56.792128   58921 provision.go:177] copyRemoteCerts
	I0722 11:51:56.792205   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:56.792238   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.794952   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.795254   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.795283   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.795439   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.795636   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.795772   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.795944   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:56.874574   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:56.898653   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 11:51:56.923200   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:51:56.946393   58921 provision.go:87] duration metric: took 516.233368ms to configureAuth
	I0722 11:51:56.946416   58921 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:56.946612   58921 config.go:182] Loaded profile config "no-preload-339929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 11:51:56.946702   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.949412   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.949923   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.949955   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.949982   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.950195   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.950330   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.950479   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.950591   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.950844   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.950865   58921 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:57.225885   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:57.225909   58921 machine.go:97] duration metric: took 1.136828183s to provisionDockerMachine
	I0722 11:51:57.225924   58921 start.go:293] postStartSetup for "no-preload-339929" (driver="kvm2")
	I0722 11:51:57.225941   58921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:57.225967   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.226315   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:57.226346   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.229404   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.229787   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.229816   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.230008   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.230210   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.230382   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.230518   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.317585   58921 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:57.323102   58921 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:57.323133   58921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:57.323218   58921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:57.323319   58921 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:57.323446   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:57.336656   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:57.365241   58921 start.go:296] duration metric: took 139.301981ms for postStartSetup
	I0722 11:51:57.365299   58921 fix.go:56] duration metric: took 20.888227284s for fixHost
	I0722 11:51:57.365322   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.368451   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.368792   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.368825   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.368964   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.369191   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.369362   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.369532   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.369698   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:57.369918   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:57.369929   58921 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:51:57.478389   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649117.454433204
	
	I0722 11:51:57.478414   58921 fix.go:216] guest clock: 1721649117.454433204
	I0722 11:51:57.478425   58921 fix.go:229] Guest: 2024-07-22 11:51:57.454433204 +0000 UTC Remote: 2024-07-22 11:51:57.365303623 +0000 UTC m=+356.953957779 (delta=89.129581ms)
	I0722 11:51:57.478469   58921 fix.go:200] guest clock delta is within tolerance: 89.129581ms
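
The fix.go lines above read the guest clock over SSH with date +%s.%N, compare it to the host's reference time, and skip resynchronization because the ~89ms delta is within tolerance. A small sketch of that comparison; the parsing helper and the one-second threshold are assumptions for illustration (the log does not state the actual tolerance value):

    package clockcheck

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses the "seconds.nanoseconds" string returned by `date +%s.%N`
    // on the guest and reports how far it is from the host reference time.
    func clockDelta(guestOut string, hostRef time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		return 0, fmt.Errorf("parse guest clock %q: %v", guestOut, err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	d := hostRef.Sub(guest)
    	if d < 0 {
    		d = -d
    	}
    	return d, nil
    }

    // withinTolerance mirrors the "guest clock delta is within tolerance" line;
    // the 1s threshold here is illustrative, not necessarily minikube's value.
    func withinTolerance(d time.Duration) bool { return d <= time.Second }
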
	I0722 11:51:57.478488   58921 start.go:83] releasing machines lock for "no-preload-339929", held for 21.001447333s
	I0722 11:51:57.478515   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.478798   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:57.481848   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.482283   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.482313   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.482464   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483024   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483211   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483286   58921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:57.483339   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.483594   58921 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:57.483620   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.486149   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486402   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486561   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.486591   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486746   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.486791   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.486808   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486969   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.487059   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.487141   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.487289   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.487306   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.487460   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.487645   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.591994   58921 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:57.598617   58921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:57.754364   58921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:57.761045   58921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:57.761104   58921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:57.778215   58921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:57.778244   58921 start.go:495] detecting cgroup driver to use...
	I0722 11:51:57.778315   58921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:57.794964   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:57.811232   58921 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:57.811292   58921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:57.826950   58921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:57.842302   58921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:57.971792   58921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:58.129047   58921 docker.go:233] disabling docker service ...
	I0722 11:51:58.129104   58921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:58.146348   58921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:58.160958   58921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:58.294011   58921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:58.414996   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:58.430045   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:58.456092   58921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0722 11:51:58.456186   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.471939   58921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:58.472003   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.485092   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.497749   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.510721   58921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:58.522286   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.535122   58921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.555717   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
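
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that pause_image, cgroup_manager, conmon_cgroup and the net.ipv4.ip_unprivileged_port_start sysctl match what the kubeadm bootstrap expects, after which crio is restarted. A line-oriented sketch of the same key = value rewrite done in Go rather than sed; the helper is illustrative, only the path and keys come from the log:

    package crioconf

    import (
    	"os"
    	"regexp"
    )

    // setConfValue replaces any existing `key = ...` line in the CRI-O drop-in with
    // `key = "value"`, mirroring the sed edits for pause_image and cgroup_manager above.
    func setConfValue(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
    	return os.WriteFile(path, out, 0o644)
    }

Applied as setConfValue("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10") and again for cgroup_manager = "cgroupfs", followed by the systemctl restart crio the log shows.
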
	I0722 11:51:58.567386   58921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:58.577638   58921 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:58.577717   58921 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:58.592354   58921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:51:58.602448   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:58.729652   58921 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:58.881699   58921 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:58.881761   58921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:58.887049   58921 start.go:563] Will wait 60s for crictl version
	I0722 11:51:58.887099   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:58.890867   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:58.933081   58921 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:58.933171   58921 ssh_runner.go:195] Run: crio --version
	I0722 11:51:58.960418   58921 ssh_runner.go:195] Run: crio --version
	I0722 11:51:58.992787   58921 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0722 11:51:54.847605   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:57.346927   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:55.716788   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:56.216920   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:56.716328   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:57.217140   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:57.717149   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.217011   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.716511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:59.216969   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:59.717145   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:00.216454   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.994009   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:58.996823   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:58.997258   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:58.997279   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:58.997465   58921 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:59.001724   58921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:59.014700   58921 kubeadm.go:883] updating cluster {Name:no-preload-339929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:59.014819   58921 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 11:51:59.014847   58921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:59.049135   58921 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0722 11:51:59.049167   58921 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 11:51:59.049252   58921 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.049268   58921 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.049310   58921 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.049314   58921 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.049335   58921 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.049249   58921 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.049445   58921 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.049480   58921 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0722 11:51:59.050964   58921 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.050974   58921 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.050994   58921 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.051032   58921 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0722 11:51:59.051056   58921 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.051075   58921 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.051098   58921 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.051039   58921 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.220737   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.233831   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.239620   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.240125   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.240548   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.269898   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0722 11:51:59.293368   58921 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0722 11:51:59.293420   58921 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.293468   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.309956   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.336323   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.359236   58921 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0722 11:51:59.359284   58921 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.359336   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.359236   58921 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0722 11:51:59.359371   58921 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.359400   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.371412   58921 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0722 11:51:59.371449   58921 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.371485   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.404322   58921 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0722 11:51:59.404364   58921 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.404427   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.542134   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.542279   58921 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0722 11:51:59.542331   58921 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.542347   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.542360   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.542383   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.542439   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.542444   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.542691   58921 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0722 11:51:59.542725   58921 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.542757   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.653771   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.653819   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.653859   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0722 11:51:59.653877   58921 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.653935   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0722 11:51:59.653945   58921 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:51:59.653994   58921 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:51:59.654000   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0722 11:51:59.654034   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0722 11:51:59.654078   58921 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:51:59.654091   58921 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:51:59.654101   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.706185   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706207   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.706218   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 11:51:59.706250   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.706256   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706292   58921 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:51:59.706298   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0722 11:51:59.706369   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706464   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0722 11:51:59.706509   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0722 11:51:59.706554   58921 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:51:57.342604   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:59.839045   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:59.846551   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:02.346391   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.347558   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:00.717154   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:01.216534   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:01.716349   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.217140   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.716458   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:03.216539   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:03.717179   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:04.216994   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:04.716264   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:05.216962   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.170882   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.464606279s)
	I0722 11:52:02.170914   58921 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.464582845s)
	I0722 11:52:02.170942   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0722 11:52:02.170923   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0722 11:52:02.170949   58921 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.464369058s)
	I0722 11:52:02.170970   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:52:02.170972   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0722 11:52:02.171024   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:52:04.139100   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.9680515s)
	I0722 11:52:04.139132   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0722 11:52:04.139166   58921 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:52:04.139250   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:52:01.840270   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.339017   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.840071   60225 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.840097   60225 pod_ready.go:81] duration metric: took 12.007790604s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.840110   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ssttk" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.845312   60225 pod_ready.go:92] pod "kube-proxy-ssttk" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.845336   60225 pod_ready.go:81] duration metric: took 5.218113ms for pod "kube-proxy-ssttk" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.845348   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.850239   60225 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.850264   60225 pod_ready.go:81] duration metric: took 4.905551ms for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.850273   60225 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:06.849408   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:09.347362   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:05.716753   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:06.216886   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:06.717064   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.217069   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.716953   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:08.216521   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:08.716334   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:09.216504   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:09.716904   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:10.216483   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.435274   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.29599961s)
	I0722 11:52:07.435305   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0722 11:52:07.435331   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:52:07.435368   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:52:08.882569   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.447179999s)
	I0722 11:52:08.882593   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0722 11:52:08.882621   58921 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:52:08.882670   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:52:06.857393   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:09.357742   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:11.845980   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:13.846559   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:10.717066   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:11.216328   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:11.717249   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:12.216579   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:12.716697   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:13.217042   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:13.717186   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:14.216301   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:14.716510   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:15.216925   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:10.861616   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.978918937s)
	I0722 11:52:10.861646   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0722 11:52:10.861670   58921 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:52:10.861717   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:52:11.517096   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 11:52:11.517126   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:52:11.517179   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:52:13.588498   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.071290819s)
	I0722 11:52:13.588531   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0722 11:52:13.588567   58921 cache_images.go:123] Successfully loaded all cached images
	I0722 11:52:13.588580   58921 cache_images.go:92] duration metric: took 14.539397599s to LoadCachedImages
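Note: once all cached images are loaded, their presence in the node's image store can be double-checked over SSH; an illustrative spot-check using tools already available inside the VM (profile name taken from the log above):

	# list images as seen by the CRI runtime (CRI-O)
	out/minikube-linux-amd64 -p no-preload-339929 ssh -- sudo crictl images
	# or ask podman directly, which is the tool the loader above used
	out/minikube-linux-amd64 -p no-preload-339929 ssh -- sudo podman images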
	I0722 11:52:13.588591   58921 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.31.0-beta.0 crio true true} ...
	I0722 11:52:13.588728   58921 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-339929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:52:13.588806   58921 ssh_runner.go:195] Run: crio config
	I0722 11:52:13.641949   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:52:13.641969   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:52:13.641978   58921 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:52:13.641997   58921 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-339929 NodeName:no-preload-339929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:52:13.642187   58921 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-339929"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:52:13.642258   58921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 11:52:13.653174   58921 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:52:13.653244   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:52:13.662655   58921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0722 11:52:13.678906   58921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 11:52:13.699269   58921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
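Note: the kubeadm config rendered above has just been written to /var/tmp/minikube/kubeadm.yaml.new and is later diffed against the copy already on the node; a minimal way to inspect it by hand (the validate step assumes a kubeadm release that ships the `config validate` subcommand):

	# compare the freshly rendered config with the one already present
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	# optional sanity check of the file's structure with kubeadm itself
	sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new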
	I0722 11:52:13.718873   58921 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I0722 11:52:13.722962   58921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:52:13.736241   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:52:13.858093   58921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:52:13.875377   58921 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929 for IP: 192.168.61.112
	I0722 11:52:13.875402   58921 certs.go:194] generating shared ca certs ...
	I0722 11:52:13.875421   58921 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:52:13.875588   58921 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:52:13.875664   58921 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:52:13.875677   58921 certs.go:256] generating profile certs ...
	I0722 11:52:13.875785   58921 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.key
	I0722 11:52:13.875857   58921 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.key.26403d20
	I0722 11:52:13.875895   58921 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.key
	I0722 11:52:13.875998   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:52:13.876025   58921 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:52:13.876036   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:52:13.876057   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:52:13.876079   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:52:13.876100   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:52:13.876139   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:52:13.876804   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:52:13.923607   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:52:13.952785   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:52:13.983113   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:52:14.012712   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 11:52:14.047958   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:52:14.077411   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:52:14.100978   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:52:14.123416   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:52:14.145662   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:52:14.169188   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:52:14.194650   58921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:52:14.212538   58921 ssh_runner.go:195] Run: openssl version
	I0722 11:52:14.218725   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:52:14.231079   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.235652   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.235695   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.241643   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:52:14.252681   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:52:14.263166   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.267588   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.267629   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.273182   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:52:14.284087   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:52:14.294571   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.298824   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.298870   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.304464   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:52:14.315110   58921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:52:14.319444   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:52:14.325221   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:52:14.330923   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:52:14.336509   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:52:14.342749   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:52:14.348854   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
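Note: the `-checkend 86400` runs above simply assert that each certificate remains valid for at least another 86400 seconds (24 hours); the same check, plus a human-readable expiry date, can be reproduced on the node:

	# exits non-zero if the certificate expires within the next 24 hours
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	# print the actual expiry date
	openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt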
	I0722 11:52:14.355682   58921 kubeadm.go:392] StartCluster: {Name:no-preload-339929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:52:14.355818   58921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:52:14.355867   58921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:52:14.395279   58921 cri.go:89] found id: ""
	I0722 11:52:14.395351   58921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:52:14.406738   58921 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:52:14.406755   58921 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:52:14.406793   58921 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:52:14.417161   58921 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:52:14.418468   58921 kubeconfig.go:125] found "no-preload-339929" server: "https://192.168.61.112:8443"
	I0722 11:52:14.420764   58921 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:52:14.430722   58921 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.112
	I0722 11:52:14.430749   58921 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:52:14.430760   58921 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:52:14.430809   58921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:52:14.472164   58921 cri.go:89] found id: ""
	I0722 11:52:14.472228   58921 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:52:14.489758   58921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:52:14.499830   58921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:52:14.499878   58921 kubeadm.go:157] found existing configuration files:
	
	I0722 11:52:14.499932   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:52:14.508977   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:52:14.509024   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:52:14.518199   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:52:14.527136   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:52:14.527182   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:52:14.536182   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:52:14.545425   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:52:14.545482   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:52:14.554843   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:52:14.563681   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:52:14.563722   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:52:14.572855   58921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:52:14.582257   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:14.691452   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.383530   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:11.857298   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:14.357114   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:16.347252   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:18.846603   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:15.716962   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.216373   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.716871   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:17.217108   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:17.716670   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:18.216503   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:18.717214   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:19.216481   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:19.716922   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:20.216618   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:15.600861   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.661719   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.756150   58921 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:52:15.756243   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.256571   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.756636   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.788487   58921 api_server.go:72] duration metric: took 1.032338614s to wait for apiserver process to appear ...
	I0722 11:52:16.788511   58921 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:52:16.788538   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:16.789057   58921 api_server.go:269] stopped: https://192.168.61.112:8443/healthz: Get "https://192.168.61.112:8443/healthz": dial tcp 192.168.61.112:8443: connect: connection refused
	I0722 11:52:17.289531   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.643492   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:52:19.643522   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:52:19.643538   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.712047   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:52:19.712087   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:52:19.789319   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.903924   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:19.903964   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:20.289484   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:20.294499   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:20.294532   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:16.357488   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:18.857066   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:20.789245   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:20.795813   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:20.795846   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:21.289564   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:21.294121   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 200:
	ok
	I0722 11:52:21.300616   58921 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 11:52:21.300644   58921 api_server.go:131] duration metric: took 4.512126962s to wait for apiserver health ...
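Note: the 403 responses earlier in this probe loop are expected while the rbac/bootstrap-roles post-start hook is still pending; once the default RBAC bindings exist, /healthz also answers unauthenticated requests, so the same probe can be reproduced from the node; an illustrative check (-k skips CA verification for brevity, ?verbose lists the individual checks as in the output above):

	curl -k "https://192.168.61.112:8443/healthz?verbose"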
	I0722 11:52:21.300652   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:52:21.300661   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:52:21.302460   58921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:52:21.347296   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:23.848716   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:20.717047   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.216924   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.716824   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:22.216907   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:22.716538   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:23.216351   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:23.716755   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:24.216816   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:24.717065   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:25.216949   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.303690   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:52:21.315042   58921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
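Note: the bridge CNI configuration is pushed to /etc/cni/net.d/1-k8s.conflist (496 bytes, per the line above); an illustrative way to confirm what the runtime will pick up:

	sudo ls -l /etc/cni/net.d/
	sudo cat /etc/cni/net.d/1-k8s.conflist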
	I0722 11:52:21.336417   58921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:52:21.347183   58921 system_pods.go:59] 8 kube-system pods found
	I0722 11:52:21.347225   58921 system_pods.go:61] "coredns-5cfdc65f69-v5qdv" [2321209d-652c-45c1-8d0a-b4ad58f60a25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:52:21.347238   58921 system_pods.go:61] "etcd-no-preload-339929" [9dbeed49-0d34-4643-8a7c-28b9b8b60b00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:52:21.347248   58921 system_pods.go:61] "kube-apiserver-no-preload-339929" [f9675e86-589e-4c6c-b4b5-627e2192b028] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:52:21.347259   58921 system_pods.go:61] "kube-controller-manager-no-preload-339929" [5033e74b-5a1c-4044-aadf-67d5e44b17c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:52:21.347265   58921 system_pods.go:61] "kube-proxy-78tx8" [13f226f0-8837-44d2-aa74-a7db43c73651] Running
	I0722 11:52:21.347276   58921 system_pods.go:61] "kube-scheduler-no-preload-339929" [bf82937c-c95c-4961-afca-60dfe128b6bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:52:21.347288   58921 system_pods.go:61] "metrics-server-78fcd8795b-2lbrr" [1eab4084-3ddf-44f3-9761-130a6f137ea6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:52:21.347294   58921 system_pods.go:61] "storage-provisioner" [66323714-b119-4680-91a3-2e2142e523b4] Running
	I0722 11:52:21.347308   58921 system_pods.go:74] duration metric: took 10.869226ms to wait for pod list to return data ...
	I0722 11:52:21.347316   58921 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:52:21.351215   58921 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:52:21.351242   58921 node_conditions.go:123] node cpu capacity is 2
	I0722 11:52:21.351254   58921 node_conditions.go:105] duration metric: took 3.932625ms to run NodePressure ...
	I0722 11:52:21.351273   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:21.620524   58921 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:52:21.625517   58921 kubeadm.go:739] kubelet initialised
	I0722 11:52:21.625540   58921 kubeadm.go:740] duration metric: took 4.987123ms waiting for restarted kubelet to initialise ...
	I0722 11:52:21.625550   58921 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:52:21.630823   58921 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:23.639602   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.140079   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:25.140103   58921 pod_ready.go:81] duration metric: took 3.509258556s for pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:25.140112   58921 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
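Note: the pod_ready polling in this run is roughly what `kubectl wait` does for the Ready condition; an equivalent manual check (the context name is assumed to match the profile, as minikube normally writes it):

	kubectl --context no-preload-339929 -n kube-system wait --for=condition=Ready \
	    pod/coredns-5cfdc65f69-v5qdv pod/etcd-no-preload-339929 --timeout=4m0s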
	I0722 11:52:20.860912   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:23.356763   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.357406   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:26.345970   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:28.347288   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.716863   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:26.217017   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:26.217108   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:26.259154   59674 cri.go:89] found id: ""
	I0722 11:52:26.259183   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.259193   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:26.259201   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:26.259260   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:26.292777   59674 cri.go:89] found id: ""
	I0722 11:52:26.292801   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.292807   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:26.292813   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:26.292858   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:26.327874   59674 cri.go:89] found id: ""
	I0722 11:52:26.327899   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.327907   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:26.327913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:26.327960   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:26.372370   59674 cri.go:89] found id: ""
	I0722 11:52:26.372405   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.372415   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:26.372421   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:26.372468   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:26.406270   59674 cri.go:89] found id: ""
	I0722 11:52:26.406294   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.406301   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:26.406306   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:26.406355   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:26.441204   59674 cri.go:89] found id: ""
	I0722 11:52:26.441230   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.441237   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:26.441242   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:26.441302   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:26.476132   59674 cri.go:89] found id: ""
	I0722 11:52:26.476162   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.476174   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:26.476180   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:26.476236   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:26.509534   59674 cri.go:89] found id: ""
	I0722 11:52:26.509565   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.509576   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:26.509588   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:26.509601   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:26.564002   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:26.564030   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:26.578619   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:26.578650   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:26.706713   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:26.706738   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:26.706752   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:26.772168   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:26.772201   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:29.313944   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:29.328002   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:29.328076   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:29.367128   59674 cri.go:89] found id: ""
	I0722 11:52:29.367157   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.367166   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:29.367173   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:29.367244   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:29.401552   59674 cri.go:89] found id: ""
	I0722 11:52:29.401581   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.401592   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:29.401599   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:29.401677   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:29.433892   59674 cri.go:89] found id: ""
	I0722 11:52:29.433919   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.433931   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:29.433943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:29.433993   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:29.469619   59674 cri.go:89] found id: ""
	I0722 11:52:29.469649   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.469660   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:29.469667   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:29.469726   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:29.504771   59674 cri.go:89] found id: ""
	I0722 11:52:29.504795   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.504805   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:29.504811   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:29.504871   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:29.538861   59674 cri.go:89] found id: ""
	I0722 11:52:29.538890   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.538900   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:29.538912   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:29.538975   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:29.593633   59674 cri.go:89] found id: ""
	I0722 11:52:29.593669   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.593680   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:29.593688   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:29.593747   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:29.638605   59674 cri.go:89] found id: ""
	I0722 11:52:29.638636   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.638645   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:29.638653   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:29.638664   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:29.691633   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:29.691662   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:29.707277   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:29.707305   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:29.785616   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:29.785638   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:29.785669   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:29.857487   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:29.857517   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:27.146649   58921 pod_ready.go:102] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:28.646058   58921 pod_ready.go:92] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:28.646083   58921 pod_ready.go:81] duration metric: took 3.505964852s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:28.646092   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:27.855581   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:29.856605   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:30.847291   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.847946   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.398141   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:32.411380   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:32.411453   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:32.445857   59674 cri.go:89] found id: ""
	I0722 11:52:32.445882   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.445889   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:32.445895   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:32.445946   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:32.478146   59674 cri.go:89] found id: ""
	I0722 11:52:32.478180   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.478190   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:32.478197   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:32.478268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:32.511110   59674 cri.go:89] found id: ""
	I0722 11:52:32.511138   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.511147   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:32.511161   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:32.511216   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:32.545388   59674 cri.go:89] found id: ""
	I0722 11:52:32.545415   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.545425   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:32.545432   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:32.545489   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:32.579097   59674 cri.go:89] found id: ""
	I0722 11:52:32.579125   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.579135   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:32.579141   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:32.579205   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:32.615302   59674 cri.go:89] found id: ""
	I0722 11:52:32.615333   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.615343   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:32.615350   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:32.615407   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:32.654527   59674 cri.go:89] found id: ""
	I0722 11:52:32.654552   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.654562   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:32.654568   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:32.654625   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:32.689409   59674 cri.go:89] found id: ""
	I0722 11:52:32.689437   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.689445   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:32.689454   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:32.689470   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:32.740478   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:32.740511   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:32.754266   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:32.754299   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:32.824441   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:32.824461   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:32.824475   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:32.896752   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:32.896781   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:30.652706   58921 pod_ready.go:102] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.653310   58921 pod_ready.go:102] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.154169   58921 pod_ready.go:92] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.154195   58921 pod_ready.go:81] duration metric: took 6.508095973s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.154207   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.160406   58921 pod_ready.go:92] pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.160429   58921 pod_ready.go:81] duration metric: took 6.213375ms for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.160440   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-78tx8" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.166358   58921 pod_ready.go:92] pod "kube-proxy-78tx8" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.166377   58921 pod_ready.go:81] duration metric: took 5.930051ms for pod "kube-proxy-78tx8" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.166387   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.170508   58921 pod_ready.go:92] pod "kube-scheduler-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.170528   58921 pod_ready.go:81] duration metric: took 4.133521ms for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.170538   58921 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:32.355967   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:34.358106   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.346579   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:37.346671   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:39.346974   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.438478   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:35.454105   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:35.454175   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:35.493287   59674 cri.go:89] found id: ""
	I0722 11:52:35.493319   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.493330   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:35.493337   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:35.493396   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:35.528035   59674 cri.go:89] found id: ""
	I0722 11:52:35.528060   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.528066   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:35.528072   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:35.528126   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:35.586153   59674 cri.go:89] found id: ""
	I0722 11:52:35.586199   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.586213   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:35.586220   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:35.586283   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:35.630371   59674 cri.go:89] found id: ""
	I0722 11:52:35.630405   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.630416   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:35.630425   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:35.630499   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:35.667593   59674 cri.go:89] found id: ""
	I0722 11:52:35.667621   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.667629   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:35.667635   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:35.667682   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:35.706933   59674 cri.go:89] found id: ""
	I0722 11:52:35.706964   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.706973   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:35.706981   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:35.707040   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:35.743174   59674 cri.go:89] found id: ""
	I0722 11:52:35.743205   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.743215   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:35.743223   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:35.743289   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:35.784450   59674 cri.go:89] found id: ""
	I0722 11:52:35.784478   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.784487   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:35.784497   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:35.784508   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:35.840326   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:35.840357   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:35.856432   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:35.856471   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:35.932273   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:35.932298   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:35.932313   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:36.010376   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:36.010420   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:38.552982   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:38.566817   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:38.566895   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:38.601313   59674 cri.go:89] found id: ""
	I0722 11:52:38.601356   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.601371   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:38.601381   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:38.601459   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:38.637303   59674 cri.go:89] found id: ""
	I0722 11:52:38.637331   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.637341   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:38.637352   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:38.637413   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:38.672840   59674 cri.go:89] found id: ""
	I0722 11:52:38.672871   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.672883   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:38.672894   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:38.672986   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:38.709375   59674 cri.go:89] found id: ""
	I0722 11:52:38.709402   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.709413   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:38.709420   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:38.709473   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:38.744060   59674 cri.go:89] found id: ""
	I0722 11:52:38.744084   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.744094   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:38.744100   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:38.744161   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:38.778322   59674 cri.go:89] found id: ""
	I0722 11:52:38.778350   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.778361   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:38.778368   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:38.778427   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:38.811803   59674 cri.go:89] found id: ""
	I0722 11:52:38.811830   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.811840   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:38.811847   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:38.811902   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:38.843935   59674 cri.go:89] found id: ""
	I0722 11:52:38.843959   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.843975   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:38.843985   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:38.843999   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:38.912613   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:38.912639   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:38.912654   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:39.001924   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:39.001964   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:39.041645   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:39.041684   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:39.093322   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:39.093354   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:37.177516   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:39.675985   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:36.856164   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:38.858983   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.847112   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:44.346271   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.606698   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:41.619758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:41.619815   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:41.657432   59674 cri.go:89] found id: ""
	I0722 11:52:41.657458   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.657469   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:41.657476   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:41.657536   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:41.695136   59674 cri.go:89] found id: ""
	I0722 11:52:41.695169   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.695177   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:41.695183   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:41.695243   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:41.735595   59674 cri.go:89] found id: ""
	I0722 11:52:41.735621   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.735641   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:41.735648   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:41.735710   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:41.770398   59674 cri.go:89] found id: ""
	I0722 11:52:41.770428   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.770438   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:41.770445   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:41.770554   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:41.808250   59674 cri.go:89] found id: ""
	I0722 11:52:41.808277   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.808285   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:41.808290   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:41.808349   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:41.843494   59674 cri.go:89] found id: ""
	I0722 11:52:41.843524   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.843536   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:41.843543   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:41.843611   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:41.882916   59674 cri.go:89] found id: ""
	I0722 11:52:41.882941   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.882949   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:41.882954   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:41.883011   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:41.916503   59674 cri.go:89] found id: ""
	I0722 11:52:41.916527   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.916538   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:41.916549   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:41.916564   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:41.966989   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:41.967023   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:42.021676   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:42.021716   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:42.054625   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:42.054655   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:42.122425   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:42.122449   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:42.122463   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:44.699097   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:44.713759   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:44.713815   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:44.752668   59674 cri.go:89] found id: ""
	I0722 11:52:44.752698   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.752709   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:44.752716   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:44.752778   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:44.793550   59674 cri.go:89] found id: ""
	I0722 11:52:44.793575   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.793587   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:44.793594   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:44.793665   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:44.833860   59674 cri.go:89] found id: ""
	I0722 11:52:44.833882   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.833890   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:44.833903   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:44.833952   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:44.873847   59674 cri.go:89] found id: ""
	I0722 11:52:44.873880   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.873898   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:44.873910   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:44.873957   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:44.907843   59674 cri.go:89] found id: ""
	I0722 11:52:44.907867   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.907877   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:44.907884   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:44.907937   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:44.942998   59674 cri.go:89] found id: ""
	I0722 11:52:44.943026   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.943034   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:44.943040   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:44.943093   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:44.981145   59674 cri.go:89] found id: ""
	I0722 11:52:44.981173   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.981183   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:44.981190   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:44.981252   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:45.018542   59674 cri.go:89] found id: ""
	I0722 11:52:45.018568   59674 logs.go:276] 0 containers: []
	W0722 11:52:45.018576   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:45.018585   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:45.018599   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:45.069480   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:45.069510   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:45.083323   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:45.083347   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:45.149976   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:45.149996   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:45.150008   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:45.230617   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:45.230649   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:41.677474   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:43.678565   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.357194   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:43.856753   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:46.346339   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.846643   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:47.770384   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:47.793582   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:47.793654   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:47.837187   59674 cri.go:89] found id: ""
	I0722 11:52:47.837215   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.837224   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:47.837232   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:47.837290   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:47.874295   59674 cri.go:89] found id: ""
	I0722 11:52:47.874325   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.874336   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:47.874345   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:47.874414   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:47.915782   59674 cri.go:89] found id: ""
	I0722 11:52:47.915812   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.915823   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:47.915830   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:47.915886   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:47.956624   59674 cri.go:89] found id: ""
	I0722 11:52:47.956653   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.956663   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:47.956670   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:47.956731   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:47.996237   59674 cri.go:89] found id: ""
	I0722 11:52:47.996264   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.996272   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:47.996277   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:47.996335   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:48.032022   59674 cri.go:89] found id: ""
	I0722 11:52:48.032046   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.032058   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:48.032066   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:48.032117   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:48.066218   59674 cri.go:89] found id: ""
	I0722 11:52:48.066248   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.066259   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:48.066265   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:48.066316   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:48.099781   59674 cri.go:89] found id: ""
	I0722 11:52:48.099803   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.099810   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:48.099818   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:48.099827   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:48.174488   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:48.174528   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:48.215029   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:48.215068   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:48.268819   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:48.268850   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:48.283307   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:48.283335   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:48.356491   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:45.678697   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.179684   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:45.857970   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.357330   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.357469   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.846976   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.847954   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.857172   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:50.871178   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:50.871244   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:50.907166   59674 cri.go:89] found id: ""
	I0722 11:52:50.907190   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.907197   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:50.907203   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:50.907256   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:50.942929   59674 cri.go:89] found id: ""
	I0722 11:52:50.942958   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.942969   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:50.942976   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:50.943041   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:50.982323   59674 cri.go:89] found id: ""
	I0722 11:52:50.982355   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.982367   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:50.982373   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:50.982436   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:51.016557   59674 cri.go:89] found id: ""
	I0722 11:52:51.016586   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.016597   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:51.016604   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:51.016662   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:51.051811   59674 cri.go:89] found id: ""
	I0722 11:52:51.051844   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.051855   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:51.051863   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:51.051923   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:51.088147   59674 cri.go:89] found id: ""
	I0722 11:52:51.088177   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.088189   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:51.088197   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:51.088257   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:51.126795   59674 cri.go:89] found id: ""
	I0722 11:52:51.126827   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.126838   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:51.126845   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:51.126909   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:51.165508   59674 cri.go:89] found id: ""
	I0722 11:52:51.165539   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.165550   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:51.165562   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:51.165575   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:51.245014   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:51.245040   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:51.245055   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:51.335845   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:51.335893   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:51.375806   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:51.375837   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:51.430241   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:51.430270   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:53.944572   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:53.957805   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:53.957899   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:53.997116   59674 cri.go:89] found id: ""
	I0722 11:52:53.997144   59674 logs.go:276] 0 containers: []
	W0722 11:52:53.997154   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:53.997161   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:53.997222   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:54.033518   59674 cri.go:89] found id: ""
	I0722 11:52:54.033544   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.033553   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:54.033560   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:54.033626   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:54.071083   59674 cri.go:89] found id: ""
	I0722 11:52:54.071108   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.071119   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:54.071127   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:54.071194   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:54.107834   59674 cri.go:89] found id: ""
	I0722 11:52:54.107860   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.107868   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:54.107873   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:54.107929   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:54.141825   59674 cri.go:89] found id: ""
	I0722 11:52:54.141850   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.141858   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:54.141865   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:54.141925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:54.174297   59674 cri.go:89] found id: ""
	I0722 11:52:54.174323   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.174333   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:54.174341   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:54.174403   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:54.206781   59674 cri.go:89] found id: ""
	I0722 11:52:54.206803   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.206811   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:54.206816   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:54.206861   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:54.239180   59674 cri.go:89] found id: ""
	I0722 11:52:54.239204   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.239212   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:54.239223   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:54.239237   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:54.307317   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:54.307345   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:54.307360   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:54.392334   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:54.392368   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:54.435129   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:54.435168   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:54.495428   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:54.495456   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:50.676790   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.678046   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:55.177430   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.357839   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:54.856859   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:55.346866   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.845527   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.009559   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:57.024145   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:57.024215   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:57.063027   59674 cri.go:89] found id: ""
	I0722 11:52:57.063053   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.063060   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:57.063066   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:57.063133   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:57.095940   59674 cri.go:89] found id: ""
	I0722 11:52:57.095961   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.095968   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:57.095973   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:57.096018   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:57.129931   59674 cri.go:89] found id: ""
	I0722 11:52:57.129952   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.129960   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:57.129965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:57.130009   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:57.164643   59674 cri.go:89] found id: ""
	I0722 11:52:57.164672   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.164683   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:57.164691   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:57.164744   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:57.201411   59674 cri.go:89] found id: ""
	I0722 11:52:57.201440   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.201451   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:57.201458   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:57.201523   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:57.235816   59674 cri.go:89] found id: ""
	I0722 11:52:57.235838   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.235848   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:57.235854   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:57.235913   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:57.273896   59674 cri.go:89] found id: ""
	I0722 11:52:57.273925   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.273936   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:57.273943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:57.273997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:57.312577   59674 cri.go:89] found id: ""
	I0722 11:52:57.312602   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.312610   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:57.312618   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:57.312636   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:57.366529   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:57.366558   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:57.380829   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:57.380854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:57.450855   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:57.450875   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:57.450889   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:57.531450   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:57.531480   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:00.071642   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:00.085199   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:00.085264   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:00.123418   59674 cri.go:89] found id: ""
	I0722 11:53:00.123439   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.123446   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:00.123451   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:00.123510   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:00.157005   59674 cri.go:89] found id: ""
	I0722 11:53:00.157032   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.157042   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:00.157049   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:00.157108   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:00.196244   59674 cri.go:89] found id: ""
	I0722 11:53:00.196272   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.196281   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:00.196286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:00.196335   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:00.233010   59674 cri.go:89] found id: ""
	I0722 11:53:00.233039   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.233049   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:00.233056   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:00.233112   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:00.268154   59674 cri.go:89] found id: ""
	I0722 11:53:00.268179   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.268187   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:00.268192   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:00.268250   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:00.304159   59674 cri.go:89] found id: ""
	I0722 11:53:00.304184   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.304194   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:00.304201   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:00.304268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:00.336853   59674 cri.go:89] found id: ""
	I0722 11:53:00.336883   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.336893   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:00.336899   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:00.336960   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:00.370921   59674 cri.go:89] found id: ""
	I0722 11:53:00.370943   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.370953   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:00.370963   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:00.370979   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:57.177913   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:59.677194   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.356163   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:59.357042   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:00.347125   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:02.846531   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:00.422367   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:00.422399   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:00.437915   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:00.437947   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:00.512663   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:00.512689   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:00.512700   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:00.595147   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:00.595189   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:03.135150   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:03.148079   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:03.148151   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:03.182278   59674 cri.go:89] found id: ""
	I0722 11:53:03.182308   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.182318   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:03.182327   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:03.182409   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:03.220570   59674 cri.go:89] found id: ""
	I0722 11:53:03.220599   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.220607   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:03.220613   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:03.220671   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:03.255917   59674 cri.go:89] found id: ""
	I0722 11:53:03.255940   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.255950   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:03.255957   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:03.256020   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:03.290857   59674 cri.go:89] found id: ""
	I0722 11:53:03.290885   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.290895   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:03.290902   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:03.290959   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:03.326917   59674 cri.go:89] found id: ""
	I0722 11:53:03.326940   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.326951   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:03.326958   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:03.327016   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:03.363787   59674 cri.go:89] found id: ""
	I0722 11:53:03.363809   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.363818   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:03.363825   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:03.363881   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:03.397453   59674 cri.go:89] found id: ""
	I0722 11:53:03.397479   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.397489   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:03.397496   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:03.397554   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:03.429984   59674 cri.go:89] found id: ""
	I0722 11:53:03.430012   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.430020   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:03.430037   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:03.430054   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:03.509273   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:03.509305   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:03.555522   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:03.555552   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:03.607361   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:03.607389   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:03.622731   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:03.622752   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:03.699844   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:02.176754   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:04.180602   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:01.856868   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:04.356343   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:05.346023   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:07.846190   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:06.200053   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:06.213571   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:06.213628   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:06.249320   59674 cri.go:89] found id: ""
	I0722 11:53:06.249348   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.249359   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:06.249366   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:06.249426   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:06.283378   59674 cri.go:89] found id: ""
	I0722 11:53:06.283405   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.283415   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:06.283422   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:06.283482   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:06.319519   59674 cri.go:89] found id: ""
	I0722 11:53:06.319540   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.319548   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:06.319553   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:06.319606   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:06.352263   59674 cri.go:89] found id: ""
	I0722 11:53:06.352289   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.352298   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:06.352310   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:06.352370   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:06.388262   59674 cri.go:89] found id: ""
	I0722 11:53:06.388285   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.388292   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:06.388297   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:06.388348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:06.427487   59674 cri.go:89] found id: ""
	I0722 11:53:06.427519   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.427529   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:06.427537   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:06.427592   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:06.462567   59674 cri.go:89] found id: ""
	I0722 11:53:06.462597   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.462610   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:06.462618   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:06.462674   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:06.496880   59674 cri.go:89] found id: ""
	I0722 11:53:06.496904   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.496911   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:06.496920   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:06.496929   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:06.549225   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:06.549262   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:06.564780   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:06.564808   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:06.632152   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:06.632177   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:06.632196   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:06.706909   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:06.706948   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:09.246773   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:09.260605   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:09.260673   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:09.294685   59674 cri.go:89] found id: ""
	I0722 11:53:09.294707   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.294718   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:09.294726   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:09.294787   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:09.331109   59674 cri.go:89] found id: ""
	I0722 11:53:09.331140   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.331148   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:09.331153   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:09.331208   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:09.366873   59674 cri.go:89] found id: ""
	I0722 11:53:09.366901   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.366911   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:09.366928   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:09.366980   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:09.399614   59674 cri.go:89] found id: ""
	I0722 11:53:09.399642   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.399649   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:09.399655   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:09.399708   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:09.434326   59674 cri.go:89] found id: ""
	I0722 11:53:09.434359   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.434369   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:09.434375   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:09.434437   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:09.468911   59674 cri.go:89] found id: ""
	I0722 11:53:09.468942   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.468953   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:09.468961   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:09.469021   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:09.510003   59674 cri.go:89] found id: ""
	I0722 11:53:09.510031   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.510042   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:09.510048   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:09.510101   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:09.545074   59674 cri.go:89] found id: ""
	I0722 11:53:09.545103   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.545113   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:09.545123   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:09.545148   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:09.559370   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:09.559399   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:09.632039   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:09.632064   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:09.632083   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:09.711851   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:09.711881   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:09.751872   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:09.751898   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:06.678310   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:09.176261   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:06.358444   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:08.858131   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:09.846552   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:12.347071   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:12.302294   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:12.315638   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:12.315708   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:12.349556   59674 cri.go:89] found id: ""
	I0722 11:53:12.349579   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.349588   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:12.349595   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:12.349651   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:12.387443   59674 cri.go:89] found id: ""
	I0722 11:53:12.387470   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.387483   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:12.387488   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:12.387541   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:12.422676   59674 cri.go:89] found id: ""
	I0722 11:53:12.422704   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.422714   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:12.422720   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:12.422781   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:12.457069   59674 cri.go:89] found id: ""
	I0722 11:53:12.457099   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.457111   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:12.457117   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:12.457175   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:12.492498   59674 cri.go:89] found id: ""
	I0722 11:53:12.492526   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.492536   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:12.492543   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:12.492603   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:12.529015   59674 cri.go:89] found id: ""
	I0722 11:53:12.529046   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.529056   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:12.529063   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:12.529122   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:12.564325   59674 cri.go:89] found id: ""
	I0722 11:53:12.564353   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.564363   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:12.564371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:12.564441   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:12.603232   59674 cri.go:89] found id: ""
	I0722 11:53:12.603257   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.603269   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:12.603278   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:12.603289   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:12.689901   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:12.689933   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:12.729780   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:12.729808   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:12.778899   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:12.778928   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:12.792619   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:12.792649   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:12.860293   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:15.361321   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:15.375062   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:15.375125   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:15.409072   59674 cri.go:89] found id: ""
	I0722 11:53:15.409096   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.409104   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:15.409109   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:15.409163   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:11.176321   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:13.176728   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.176983   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:11.356441   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:13.356690   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:14.846984   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:17.346182   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:19.346559   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.447004   59674 cri.go:89] found id: ""
	I0722 11:53:15.447026   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.447033   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:15.447039   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:15.447096   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:15.480783   59674 cri.go:89] found id: ""
	I0722 11:53:15.480811   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.480822   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:15.480829   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:15.480906   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:15.520672   59674 cri.go:89] found id: ""
	I0722 11:53:15.520701   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.520713   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:15.520721   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:15.520777   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:15.557886   59674 cri.go:89] found id: ""
	I0722 11:53:15.557916   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.557926   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:15.557933   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:15.557994   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:15.593517   59674 cri.go:89] found id: ""
	I0722 11:53:15.593545   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.593555   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:15.593561   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:15.593619   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:15.628205   59674 cri.go:89] found id: ""
	I0722 11:53:15.628235   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.628246   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:15.628253   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:15.628314   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:15.664239   59674 cri.go:89] found id: ""
	I0722 11:53:15.664265   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.664276   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:15.664287   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:15.664300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:15.714246   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:15.714281   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:15.728467   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:15.728490   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:15.813299   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:15.813323   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:15.813339   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:15.899949   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:15.899984   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:18.443394   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:18.457499   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:18.457555   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:18.489712   59674 cri.go:89] found id: ""
	I0722 11:53:18.489735   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.489745   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:18.489752   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:18.489812   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:18.524947   59674 cri.go:89] found id: ""
	I0722 11:53:18.524973   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.524982   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:18.524989   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:18.525045   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:18.560325   59674 cri.go:89] found id: ""
	I0722 11:53:18.560350   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.560361   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:18.560367   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:18.560439   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:18.594221   59674 cri.go:89] found id: ""
	I0722 11:53:18.594247   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.594255   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:18.594265   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:18.594322   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:18.630809   59674 cri.go:89] found id: ""
	I0722 11:53:18.630839   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.630850   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:18.630857   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:18.630917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:18.666051   59674 cri.go:89] found id: ""
	I0722 11:53:18.666078   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.666089   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:18.666100   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:18.666159   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:18.703337   59674 cri.go:89] found id: ""
	I0722 11:53:18.703362   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.703370   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:18.703375   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:18.703435   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:18.738960   59674 cri.go:89] found id: ""
	I0722 11:53:18.738990   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.738999   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:18.739008   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:18.739022   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:18.788130   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:18.788163   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:18.802219   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:18.802249   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:18.869568   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:18.869586   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:18.869597   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:18.947223   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:18.947256   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:17.177247   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:19.677072   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.857320   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:18.356290   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:20.356364   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:21.346698   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:23.846749   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:21.487936   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:21.501337   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:21.501421   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:21.537649   59674 cri.go:89] found id: ""
	I0722 11:53:21.537674   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.537681   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:21.537686   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:21.537746   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:21.583693   59674 cri.go:89] found id: ""
	I0722 11:53:21.583728   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.583738   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:21.583745   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:21.583803   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:21.621690   59674 cri.go:89] found id: ""
	I0722 11:53:21.621714   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.621722   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:21.621728   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:21.621773   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:21.657855   59674 cri.go:89] found id: ""
	I0722 11:53:21.657878   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.657885   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:21.657891   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:21.657953   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:21.695025   59674 cri.go:89] found id: ""
	I0722 11:53:21.695051   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.695059   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:21.695065   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:21.695113   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:21.730108   59674 cri.go:89] found id: ""
	I0722 11:53:21.730138   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.730146   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:21.730151   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:21.730208   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:21.763943   59674 cri.go:89] found id: ""
	I0722 11:53:21.763972   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.763980   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:21.763985   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:21.764030   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:21.801227   59674 cri.go:89] found id: ""
	I0722 11:53:21.801251   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.801259   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:21.801270   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:21.801283   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:21.851428   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:21.851457   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:21.867798   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:21.867827   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:21.945577   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:21.945599   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:21.945612   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:22.028796   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:22.028839   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:24.577167   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:24.589859   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:24.589917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:24.623952   59674 cri.go:89] found id: ""
	I0722 11:53:24.623985   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.623997   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:24.624003   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:24.624065   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:24.658881   59674 cri.go:89] found id: ""
	I0722 11:53:24.658910   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.658919   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:24.658925   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:24.658973   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:24.694551   59674 cri.go:89] found id: ""
	I0722 11:53:24.694574   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.694584   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:24.694590   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:24.694634   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:24.728952   59674 cri.go:89] found id: ""
	I0722 11:53:24.728980   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.728990   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:24.728999   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:24.729061   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:24.764562   59674 cri.go:89] found id: ""
	I0722 11:53:24.764584   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.764592   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:24.764597   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:24.764643   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:24.804184   59674 cri.go:89] found id: ""
	I0722 11:53:24.804209   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.804219   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:24.804226   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:24.804277   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:24.841870   59674 cri.go:89] found id: ""
	I0722 11:53:24.841896   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.841906   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:24.841913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:24.841967   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:24.876174   59674 cri.go:89] found id: ""
	I0722 11:53:24.876201   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.876210   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:24.876220   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:24.876234   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:24.928405   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:24.928434   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:24.942443   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:24.942472   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:25.010281   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:25.010304   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:25.010318   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:25.091493   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:25.091525   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:22.176013   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:24.177414   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:22.356642   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:24.856817   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:26.346061   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:28.346192   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:27.630939   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:27.644250   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:27.644324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:27.686356   59674 cri.go:89] found id: ""
	I0722 11:53:27.686381   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.686391   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:27.686404   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:27.686483   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:27.719105   59674 cri.go:89] found id: ""
	I0722 11:53:27.719133   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.719143   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:27.719149   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:27.719210   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:27.755476   59674 cri.go:89] found id: ""
	I0722 11:53:27.755505   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.755514   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:27.755520   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:27.755570   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:27.789936   59674 cri.go:89] found id: ""
	I0722 11:53:27.789963   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.789971   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:27.789977   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:27.790023   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:27.824246   59674 cri.go:89] found id: ""
	I0722 11:53:27.824273   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.824280   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:27.824286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:27.824332   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:27.860081   59674 cri.go:89] found id: ""
	I0722 11:53:27.860107   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.860114   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:27.860120   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:27.860172   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:27.895705   59674 cri.go:89] found id: ""
	I0722 11:53:27.895732   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.895741   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:27.895748   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:27.895801   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:27.930750   59674 cri.go:89] found id: ""
	I0722 11:53:27.930774   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.930781   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:27.930790   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:27.930802   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:28.025545   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:28.025567   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:28.025578   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:28.111194   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:28.111227   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:28.154270   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:28.154300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:28.205822   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:28.205854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:26.677054   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:29.178063   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:26.856858   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:29.356840   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:30.346338   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:32.346478   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:30.720468   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:30.733753   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:30.733806   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:30.771774   59674 cri.go:89] found id: ""
	I0722 11:53:30.771803   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.771810   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:30.771816   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:30.771876   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:30.810499   59674 cri.go:89] found id: ""
	I0722 11:53:30.810526   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.810537   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:30.810543   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:30.810608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:30.846824   59674 cri.go:89] found id: ""
	I0722 11:53:30.846854   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.846865   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:30.846872   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:30.846929   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:30.882372   59674 cri.go:89] found id: ""
	I0722 11:53:30.882399   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.882408   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:30.882415   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:30.882462   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:30.916152   59674 cri.go:89] found id: ""
	I0722 11:53:30.916186   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.916201   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:30.916209   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:30.916281   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:30.950442   59674 cri.go:89] found id: ""
	I0722 11:53:30.950466   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.950475   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:30.950482   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:30.950537   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:30.988328   59674 cri.go:89] found id: ""
	I0722 11:53:30.988355   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.988367   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:30.988374   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:30.988452   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:31.024500   59674 cri.go:89] found id: ""
	I0722 11:53:31.024531   59674 logs.go:276] 0 containers: []
	W0722 11:53:31.024542   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:31.024552   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:31.024565   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:31.078276   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:31.078306   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:31.093640   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:31.093665   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:31.161107   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:31.161131   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:31.161145   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:31.248520   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:31.248552   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:33.792694   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:33.806731   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:33.806802   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:33.840813   59674 cri.go:89] found id: ""
	I0722 11:53:33.840842   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.840852   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:33.840859   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:33.840930   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:33.878353   59674 cri.go:89] found id: ""
	I0722 11:53:33.878380   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.878388   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:33.878394   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:33.878453   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:33.913894   59674 cri.go:89] found id: ""
	I0722 11:53:33.913927   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.913937   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:33.913944   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:33.914007   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:33.950659   59674 cri.go:89] found id: ""
	I0722 11:53:33.950689   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.950700   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:33.950706   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:33.950762   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:33.987904   59674 cri.go:89] found id: ""
	I0722 11:53:33.987932   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.987940   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:33.987945   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:33.987995   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:34.022877   59674 cri.go:89] found id: ""
	I0722 11:53:34.022900   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.022910   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:34.022918   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:34.022970   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:34.056678   59674 cri.go:89] found id: ""
	I0722 11:53:34.056707   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.056717   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:34.056722   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:34.056769   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:34.089573   59674 cri.go:89] found id: ""
	I0722 11:53:34.089602   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.089610   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:34.089618   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:34.089630   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:34.161023   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:34.161043   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:34.161058   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:34.243215   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:34.243249   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:34.290788   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:34.290812   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:34.339653   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:34.339692   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:31.677233   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:33.678067   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:31.856615   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:33.857665   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:34.846962   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.847525   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:39.347402   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.857217   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:36.871083   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:36.871150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:36.913807   59674 cri.go:89] found id: ""
	I0722 11:53:36.913833   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.913841   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:36.913847   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:36.913923   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:36.953290   59674 cri.go:89] found id: ""
	I0722 11:53:36.953316   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.953327   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:36.953334   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:36.953395   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:36.990900   59674 cri.go:89] found id: ""
	I0722 11:53:36.990930   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.990938   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:36.990943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:36.990997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:37.034346   59674 cri.go:89] found id: ""
	I0722 11:53:37.034371   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.034381   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:37.034387   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:37.034444   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:37.071413   59674 cri.go:89] found id: ""
	I0722 11:53:37.071440   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.071451   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:37.071458   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:37.071509   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:37.107034   59674 cri.go:89] found id: ""
	I0722 11:53:37.107065   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.107076   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:37.107084   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:37.107143   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:37.145505   59674 cri.go:89] found id: ""
	I0722 11:53:37.145528   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.145536   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:37.145545   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:37.145607   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:37.182287   59674 cri.go:89] found id: ""
	I0722 11:53:37.182313   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.182321   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:37.182332   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:37.182343   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:37.195663   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:37.195688   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:37.267451   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:37.267476   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:37.267492   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:37.348532   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:37.348561   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:37.396108   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:37.396134   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:39.946775   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:39.959980   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:39.960039   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:39.994172   59674 cri.go:89] found id: ""
	I0722 11:53:39.994198   59674 logs.go:276] 0 containers: []
	W0722 11:53:39.994208   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:39.994213   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:39.994269   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:40.032782   59674 cri.go:89] found id: ""
	I0722 11:53:40.032813   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.032823   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:40.032830   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:40.032890   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:40.067503   59674 cri.go:89] found id: ""
	I0722 11:53:40.067525   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.067532   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:40.067537   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:40.067593   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:40.102234   59674 cri.go:89] found id: ""
	I0722 11:53:40.102262   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.102273   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:40.102280   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:40.102342   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:40.135152   59674 cri.go:89] found id: ""
	I0722 11:53:40.135180   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.135190   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:40.135197   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:40.135262   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:40.168930   59674 cri.go:89] found id: ""
	I0722 11:53:40.168958   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.168978   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:40.168993   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:40.169056   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:40.209032   59674 cri.go:89] found id: ""
	I0722 11:53:40.209058   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.209065   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:40.209071   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:40.209131   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:40.243952   59674 cri.go:89] found id: ""
	I0722 11:53:40.243976   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.243984   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:40.243993   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:40.244006   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:40.297909   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:40.297944   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:40.313359   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:40.313385   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:40.391089   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:40.391118   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:40.391136   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:36.178616   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:38.677556   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.356964   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:38.857992   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:41.847033   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:44.346087   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:40.469622   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:40.469652   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:43.010264   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:43.023750   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:43.023823   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:43.058899   59674 cri.go:89] found id: ""
	I0722 11:53:43.058922   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.058930   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:43.058937   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:43.058999   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:43.093308   59674 cri.go:89] found id: ""
	I0722 11:53:43.093328   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.093336   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:43.093341   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:43.093385   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:43.126617   59674 cri.go:89] found id: ""
	I0722 11:53:43.126648   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.126671   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:43.126686   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:43.126737   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:43.159455   59674 cri.go:89] found id: ""
	I0722 11:53:43.159482   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.159492   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:43.159500   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:43.159561   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:43.195726   59674 cri.go:89] found id: ""
	I0722 11:53:43.195749   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.195758   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:43.195766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:43.195830   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:43.231996   59674 cri.go:89] found id: ""
	I0722 11:53:43.232025   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.232038   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:43.232046   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:43.232118   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:43.266911   59674 cri.go:89] found id: ""
	I0722 11:53:43.266936   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.266943   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:43.266948   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:43.267005   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:43.303202   59674 cri.go:89] found id: ""
	I0722 11:53:43.303227   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.303236   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:43.303243   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:43.303255   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:43.377328   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:43.377362   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:43.418732   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:43.418759   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:43.471507   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:43.471536   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:43.485141   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:43.485175   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:43.557071   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:41.178042   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:43.178179   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:41.357090   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:43.856788   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:46.346435   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.347938   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:46.057361   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:46.071701   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:46.071784   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:46.107818   59674 cri.go:89] found id: ""
	I0722 11:53:46.107845   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.107853   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:46.107859   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:46.107952   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:46.141871   59674 cri.go:89] found id: ""
	I0722 11:53:46.141898   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.141906   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:46.141911   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:46.141972   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:46.180980   59674 cri.go:89] found id: ""
	I0722 11:53:46.181004   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.181014   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:46.181021   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:46.181083   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:46.219765   59674 cri.go:89] found id: ""
	I0722 11:53:46.219797   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.219806   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:46.219812   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:46.219866   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:46.259517   59674 cri.go:89] found id: ""
	I0722 11:53:46.259544   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.259554   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:46.259562   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:46.259621   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:46.292190   59674 cri.go:89] found id: ""
	I0722 11:53:46.292220   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.292230   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:46.292239   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:46.292305   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:46.325494   59674 cri.go:89] found id: ""
	I0722 11:53:46.325519   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.325529   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:46.325536   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:46.325608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:46.364367   59674 cri.go:89] found id: ""
	I0722 11:53:46.364403   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.364412   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:46.364422   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:46.364435   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:46.417749   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:46.417792   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:46.433793   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:46.433817   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:46.502075   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:46.502098   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:46.502111   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:46.584038   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:46.584075   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:49.127895   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:49.141601   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:49.141672   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:49.175251   59674 cri.go:89] found id: ""
	I0722 11:53:49.175276   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.175284   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:49.175290   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:49.175346   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:49.214504   59674 cri.go:89] found id: ""
	I0722 11:53:49.214552   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.214563   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:49.214570   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:49.214631   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:49.251844   59674 cri.go:89] found id: ""
	I0722 11:53:49.251872   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.251882   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:49.251889   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:49.251955   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:49.285540   59674 cri.go:89] found id: ""
	I0722 11:53:49.285569   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.285577   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:49.285582   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:49.285630   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:49.323300   59674 cri.go:89] found id: ""
	I0722 11:53:49.323321   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.323331   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:49.323336   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:49.323393   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:49.361571   59674 cri.go:89] found id: ""
	I0722 11:53:49.361599   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.361609   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:49.361615   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:49.361675   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:49.398709   59674 cri.go:89] found id: ""
	I0722 11:53:49.398736   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.398747   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:49.398753   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:49.398813   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:49.430527   59674 cri.go:89] found id: ""
	I0722 11:53:49.430552   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.430564   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:49.430576   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:49.430591   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:49.481517   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:49.481557   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:49.496069   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:49.496094   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:49.563515   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:49.563536   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:49.563549   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:49.645313   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:49.645354   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:45.678130   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.179309   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:45.857932   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.356438   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:50.356527   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:50.348077   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.846675   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.188460   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:52.201620   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:52.201689   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:52.238836   59674 cri.go:89] found id: ""
	I0722 11:53:52.238858   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.238865   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:52.238870   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:52.238932   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:52.275739   59674 cri.go:89] found id: ""
	I0722 11:53:52.275760   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.275768   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:52.275781   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:52.275839   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:52.310362   59674 cri.go:89] found id: ""
	I0722 11:53:52.310390   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.310397   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:52.310402   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:52.310461   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:52.348733   59674 cri.go:89] found id: ""
	I0722 11:53:52.348753   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.348760   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:52.348766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:52.348822   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:52.383052   59674 cri.go:89] found id: ""
	I0722 11:53:52.383079   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.383087   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:52.383094   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:52.383155   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:52.420557   59674 cri.go:89] found id: ""
	I0722 11:53:52.420579   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.420587   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:52.420592   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:52.420655   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:52.454027   59674 cri.go:89] found id: ""
	I0722 11:53:52.454057   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.454066   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:52.454073   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:52.454134   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:52.495433   59674 cri.go:89] found id: ""
	I0722 11:53:52.495458   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.495469   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:52.495480   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:52.495493   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:52.541383   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:52.541417   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:52.595687   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:52.595733   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:52.609965   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:52.609987   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:52.687531   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:52.687552   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:52.687565   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:55.270419   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:55.284577   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:55.284632   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:55.321978   59674 cri.go:89] found id: ""
	I0722 11:53:55.322014   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.322023   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:55.322030   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:55.322092   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:55.358710   59674 cri.go:89] found id: ""
	I0722 11:53:55.358736   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.358746   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:55.358753   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:55.358807   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:55.394784   59674 cri.go:89] found id: ""
	I0722 11:53:55.394810   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.394820   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:55.394827   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:55.394884   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:50.677072   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.678016   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.177624   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.356565   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:54.357061   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.347422   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:57.846266   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.429035   59674 cri.go:89] found id: ""
	I0722 11:53:55.429059   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.429066   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:55.429072   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:55.429122   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:55.464733   59674 cri.go:89] found id: ""
	I0722 11:53:55.464754   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.464761   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:55.464767   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:55.464824   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:55.500113   59674 cri.go:89] found id: ""
	I0722 11:53:55.500140   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.500152   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:55.500164   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:55.500227   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:55.536013   59674 cri.go:89] found id: ""
	I0722 11:53:55.536040   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.536050   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:55.536056   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:55.536129   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:55.575385   59674 cri.go:89] found id: ""
	I0722 11:53:55.575412   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.575420   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:55.575428   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:55.575439   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:55.628427   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:55.628459   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:55.642648   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:55.642677   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:55.715236   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:55.715258   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:55.715270   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:55.794200   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:55.794233   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:58.336329   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:58.351000   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:58.351056   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:58.389817   59674 cri.go:89] found id: ""
	I0722 11:53:58.389841   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.389849   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:58.389854   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:58.389902   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:58.430814   59674 cri.go:89] found id: ""
	I0722 11:53:58.430843   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.430852   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:58.430857   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:58.430917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:58.477898   59674 cri.go:89] found id: ""
	I0722 11:53:58.477928   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.477938   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:58.477947   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:58.477992   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:58.513426   59674 cri.go:89] found id: ""
	I0722 11:53:58.513450   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.513461   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:58.513468   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:58.513530   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:58.546455   59674 cri.go:89] found id: ""
	I0722 11:53:58.546484   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.546494   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:58.546501   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:58.546560   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:58.582248   59674 cri.go:89] found id: ""
	I0722 11:53:58.582273   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.582280   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:58.582286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:58.582339   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:58.617221   59674 cri.go:89] found id: ""
	I0722 11:53:58.617246   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.617253   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:58.617259   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:58.617321   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:58.648896   59674 cri.go:89] found id: ""
	I0722 11:53:58.648930   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.648941   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:58.648949   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:58.648962   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:58.701735   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:58.701771   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:58.715747   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:58.715766   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:58.782104   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:58.782125   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:58.782136   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:58.868634   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:58.868664   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:57.677281   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:00.179188   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:56.856873   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:58.864754   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:59.846378   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:02.346626   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:04.346748   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:01.410874   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:01.423839   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:01.423914   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:01.460156   59674 cri.go:89] found id: ""
	I0722 11:54:01.460181   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.460191   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:01.460198   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:01.460252   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:01.497130   59674 cri.go:89] found id: ""
	I0722 11:54:01.497156   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.497165   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:01.497172   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:01.497228   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:01.532805   59674 cri.go:89] found id: ""
	I0722 11:54:01.532832   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.532842   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:01.532849   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:01.532907   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:01.569955   59674 cri.go:89] found id: ""
	I0722 11:54:01.569989   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.569999   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:01.570014   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:01.570067   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:01.602937   59674 cri.go:89] found id: ""
	I0722 11:54:01.602967   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.602977   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:01.602983   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:01.603033   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:01.634250   59674 cri.go:89] found id: ""
	I0722 11:54:01.634276   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.634283   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:01.634289   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:01.634337   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:01.670256   59674 cri.go:89] found id: ""
	I0722 11:54:01.670286   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.670295   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:01.670300   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:01.670348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:01.708555   59674 cri.go:89] found id: ""
	I0722 11:54:01.708577   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.708584   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:01.708592   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:01.708603   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:01.723065   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:01.723090   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:01.790642   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:01.790662   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:01.790673   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:01.887827   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:01.887861   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:01.927121   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:01.927143   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:04.479248   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:04.493038   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:04.493101   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:04.527516   59674 cri.go:89] found id: ""
	I0722 11:54:04.527539   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.527547   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:04.527557   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:04.527603   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:04.565830   59674 cri.go:89] found id: ""
	I0722 11:54:04.565863   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.565874   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:04.565882   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:04.565970   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:04.606198   59674 cri.go:89] found id: ""
	I0722 11:54:04.606223   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.606235   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:04.606242   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:04.606301   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:04.650372   59674 cri.go:89] found id: ""
	I0722 11:54:04.650394   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.650403   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:04.650411   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:04.650473   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:04.689556   59674 cri.go:89] found id: ""
	I0722 11:54:04.689580   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.689587   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:04.689592   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:04.689648   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:04.724954   59674 cri.go:89] found id: ""
	I0722 11:54:04.724986   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.724997   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:04.725004   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:04.725057   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:04.769000   59674 cri.go:89] found id: ""
	I0722 11:54:04.769024   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.769031   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:04.769037   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:04.769088   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:04.802022   59674 cri.go:89] found id: ""
	I0722 11:54:04.802042   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.802049   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:04.802057   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:04.802067   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:04.855969   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:04.856006   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:04.871210   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:04.871238   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:04.938050   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:04.938069   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:04.938082   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:05.014415   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:05.014449   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:02.677036   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:04.677779   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:01.356993   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:03.856173   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:06.847195   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:08.847333   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:07.556725   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:07.583525   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:07.583600   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:07.618546   59674 cri.go:89] found id: ""
	I0722 11:54:07.618574   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.618584   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:07.618591   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:07.618651   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:07.655218   59674 cri.go:89] found id: ""
	I0722 11:54:07.655247   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.655256   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:07.655261   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:07.655321   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:07.695453   59674 cri.go:89] found id: ""
	I0722 11:54:07.695482   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.695491   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:07.695499   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:07.695558   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:07.729887   59674 cri.go:89] found id: ""
	I0722 11:54:07.729922   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.729932   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:07.729939   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:07.729998   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:07.768429   59674 cri.go:89] found id: ""
	I0722 11:54:07.768451   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.768458   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:07.768464   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:07.768520   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:07.804372   59674 cri.go:89] found id: ""
	I0722 11:54:07.804408   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.804419   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:07.804426   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:07.804479   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:07.840924   59674 cri.go:89] found id: ""
	I0722 11:54:07.840948   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.840958   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:07.840965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:07.841027   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:07.877796   59674 cri.go:89] found id: ""
	I0722 11:54:07.877823   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.877830   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:07.877838   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:07.877849   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:07.930437   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:07.930467   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:07.943581   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:07.943611   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:08.013944   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:08.013963   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:08.013973   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:08.090969   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:08.091007   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:07.178423   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:09.178648   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:05.856697   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:07.857718   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:10.356584   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:11.345407   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:13.346477   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:10.631507   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:10.644886   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:10.644958   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:10.679242   59674 cri.go:89] found id: ""
	I0722 11:54:10.679268   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.679278   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:10.679284   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:10.679340   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:10.714324   59674 cri.go:89] found id: ""
	I0722 11:54:10.714351   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.714358   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:10.714364   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:10.714425   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:10.751053   59674 cri.go:89] found id: ""
	I0722 11:54:10.751075   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.751090   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:10.751097   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:10.751164   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:10.788736   59674 cri.go:89] found id: ""
	I0722 11:54:10.788765   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.788775   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:10.788782   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:10.788899   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:10.823780   59674 cri.go:89] found id: ""
	I0722 11:54:10.823804   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.823814   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:10.823821   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:10.823884   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:10.859708   59674 cri.go:89] found id: ""
	I0722 11:54:10.859731   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.859741   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:10.859748   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:10.859804   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:10.893364   59674 cri.go:89] found id: ""
	I0722 11:54:10.893390   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.893400   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:10.893409   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:10.893471   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:10.929444   59674 cri.go:89] found id: ""
	I0722 11:54:10.929472   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.929481   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:10.929489   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:10.929501   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:10.968567   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:10.968598   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:11.024447   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:11.024484   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:11.039405   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:11.039429   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:11.116322   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:11.116341   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:11.116356   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:13.697581   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:13.711738   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:13.711831   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:13.747711   59674 cri.go:89] found id: ""
	I0722 11:54:13.747742   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.747750   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:13.747757   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:13.747812   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:13.790965   59674 cri.go:89] found id: ""
	I0722 11:54:13.790987   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.790997   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:13.791005   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:13.791053   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:13.829043   59674 cri.go:89] found id: ""
	I0722 11:54:13.829071   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.829080   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:13.829086   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:13.829159   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:13.865542   59674 cri.go:89] found id: ""
	I0722 11:54:13.865560   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.865567   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:13.865572   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:13.865615   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:13.897709   59674 cri.go:89] found id: ""
	I0722 11:54:13.897749   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.897762   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:13.897769   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:13.897833   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:13.931319   59674 cri.go:89] found id: ""
	I0722 11:54:13.931339   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.931348   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:13.931355   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:13.931409   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:13.987927   59674 cri.go:89] found id: ""
	I0722 11:54:13.987954   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.987964   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:13.987970   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:13.988030   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:14.028680   59674 cri.go:89] found id: ""
	I0722 11:54:14.028706   59674 logs.go:276] 0 containers: []
	W0722 11:54:14.028716   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:14.028726   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:14.028743   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:14.089863   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:14.089904   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:14.103664   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:14.103691   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:14.174453   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:14.174479   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:14.174496   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:14.260748   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:14.260780   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:11.677037   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:13.679784   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:12.856073   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:14.857810   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:15.846577   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:17.846873   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:16.800474   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:16.814408   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:16.814472   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:16.849936   59674 cri.go:89] found id: ""
	I0722 11:54:16.849963   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.849972   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:16.849979   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:16.850037   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:16.884323   59674 cri.go:89] found id: ""
	I0722 11:54:16.884349   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.884360   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:16.884367   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:16.884445   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:16.921549   59674 cri.go:89] found id: ""
	I0722 11:54:16.921635   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.921652   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:16.921660   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:16.921726   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:16.959670   59674 cri.go:89] found id: ""
	I0722 11:54:16.959701   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.959711   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:16.959719   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:16.959779   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:16.995577   59674 cri.go:89] found id: ""
	I0722 11:54:16.995605   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.995615   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:16.995624   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:16.995683   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:17.032026   59674 cri.go:89] found id: ""
	I0722 11:54:17.032056   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.032067   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:17.032075   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:17.032156   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:17.068309   59674 cri.go:89] found id: ""
	I0722 11:54:17.068337   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.068348   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:17.068355   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:17.068433   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:17.106731   59674 cri.go:89] found id: ""
	I0722 11:54:17.106760   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.106776   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:17.106787   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:17.106801   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:17.159944   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:17.159971   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:17.174479   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:17.174513   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:17.249311   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:17.249332   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:17.249345   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:17.335527   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:17.335561   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:19.874791   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:19.892887   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:19.892961   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:19.945700   59674 cri.go:89] found id: ""
	I0722 11:54:19.945729   59674 logs.go:276] 0 containers: []
	W0722 11:54:19.945737   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:19.945742   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:19.945799   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:19.996027   59674 cri.go:89] found id: ""
	I0722 11:54:19.996062   59674 logs.go:276] 0 containers: []
	W0722 11:54:19.996072   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:19.996078   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:19.996133   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:20.040793   59674 cri.go:89] found id: ""
	I0722 11:54:20.040820   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.040830   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:20.040837   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:20.040906   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:20.073737   59674 cri.go:89] found id: ""
	I0722 11:54:20.073760   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.073768   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:20.073774   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:20.073817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:20.108255   59674 cri.go:89] found id: ""
	I0722 11:54:20.108280   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.108287   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:20.108294   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:20.108342   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:20.143140   59674 cri.go:89] found id: ""
	I0722 11:54:20.143165   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.143174   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:20.143180   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:20.143225   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:20.177009   59674 cri.go:89] found id: ""
	I0722 11:54:20.177030   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.177037   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:20.177043   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:20.177089   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:20.215743   59674 cri.go:89] found id: ""
	I0722 11:54:20.215765   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.215773   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:20.215781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:20.215791   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:20.267872   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:20.267905   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:20.281601   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:20.281626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:20.352347   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:20.352364   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:20.352376   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:16.178494   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:18.676724   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:17.357519   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:19.856259   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:20.346488   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:22.847018   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:20.431695   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:20.431727   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:22.974218   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:22.988161   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:22.988235   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:23.024542   59674 cri.go:89] found id: ""
	I0722 11:54:23.024571   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.024581   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:23.024588   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:23.024656   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:23.067343   59674 cri.go:89] found id: ""
	I0722 11:54:23.067367   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.067376   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:23.067383   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:23.067443   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:23.103711   59674 cri.go:89] found id: ""
	I0722 11:54:23.103741   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.103751   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:23.103758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:23.103817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:23.137896   59674 cri.go:89] found id: ""
	I0722 11:54:23.137926   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.137937   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:23.137944   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:23.138002   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:23.174689   59674 cri.go:89] found id: ""
	I0722 11:54:23.174722   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.174733   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:23.174742   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:23.174795   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:23.208669   59674 cri.go:89] found id: ""
	I0722 11:54:23.208690   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.208700   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:23.208708   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:23.208766   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:23.243286   59674 cri.go:89] found id: ""
	I0722 11:54:23.243314   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.243326   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:23.243335   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:23.243401   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:23.279277   59674 cri.go:89] found id: ""
	I0722 11:54:23.279303   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.279312   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:23.279324   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:23.279337   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:23.332016   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:23.332045   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:23.346383   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:23.346417   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:23.421449   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:23.421471   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:23.421486   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:23.507395   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:23.507432   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:20.678148   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:23.180048   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:21.856482   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:24.357098   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:25.346414   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:27.847108   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:26.053610   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:26.068359   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:26.068448   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:26.102425   59674 cri.go:89] found id: ""
	I0722 11:54:26.102454   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.102465   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:26.102472   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:26.102531   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:26.135572   59674 cri.go:89] found id: ""
	I0722 11:54:26.135598   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.135608   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:26.135616   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:26.135682   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:26.175015   59674 cri.go:89] found id: ""
	I0722 11:54:26.175044   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.175054   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:26.175062   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:26.175123   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:26.209186   59674 cri.go:89] found id: ""
	I0722 11:54:26.209209   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.209216   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:26.209221   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:26.209275   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:26.248477   59674 cri.go:89] found id: ""
	I0722 11:54:26.248500   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.248507   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:26.248512   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:26.248590   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:26.281481   59674 cri.go:89] found id: ""
	I0722 11:54:26.281506   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.281515   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:26.281520   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:26.281580   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:26.314467   59674 cri.go:89] found id: ""
	I0722 11:54:26.314496   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.314503   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:26.314509   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:26.314556   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:26.349396   59674 cri.go:89] found id: ""
	I0722 11:54:26.349422   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.349431   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:26.349441   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:26.349454   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:26.403227   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:26.403253   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:26.415860   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:26.415882   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:26.484768   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:26.484793   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:26.484809   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:26.563360   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:26.563396   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:29.103764   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:29.117120   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:29.117193   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:29.153198   59674 cri.go:89] found id: ""
	I0722 11:54:29.153241   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.153252   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:29.153260   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:29.153324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:29.190406   59674 cri.go:89] found id: ""
	I0722 11:54:29.190426   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.190433   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:29.190438   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:29.190486   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:29.232049   59674 cri.go:89] found id: ""
	I0722 11:54:29.232073   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.232080   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:29.232085   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:29.232147   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:29.270174   59674 cri.go:89] found id: ""
	I0722 11:54:29.270200   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.270208   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:29.270218   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:29.270268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:29.307709   59674 cri.go:89] found id: ""
	I0722 11:54:29.307733   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.307740   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:29.307746   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:29.307802   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:29.343807   59674 cri.go:89] found id: ""
	I0722 11:54:29.343832   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.343842   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:29.343850   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:29.343907   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:29.380240   59674 cri.go:89] found id: ""
	I0722 11:54:29.380263   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.380270   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:29.380276   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:29.380332   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:29.412785   59674 cri.go:89] found id: ""
	I0722 11:54:29.412811   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.412820   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:29.412830   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:29.412844   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:29.470948   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:29.470985   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:29.485120   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:29.485146   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:29.558760   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:29.558778   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:29.558792   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:29.638093   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:29.638123   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:25.677216   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:28.177196   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:30.179148   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:26.357390   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:28.856928   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:30.345586   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:32.346444   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:34.347606   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:32.183511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:32.196719   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:32.196796   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:32.229436   59674 cri.go:89] found id: ""
	I0722 11:54:32.229466   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.229474   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:32.229480   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:32.229533   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:32.271971   59674 cri.go:89] found id: ""
	I0722 11:54:32.271998   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.272008   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:32.272017   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:32.272086   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:32.302967   59674 cri.go:89] found id: ""
	I0722 11:54:32.302991   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.302999   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:32.303005   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:32.303053   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:32.334443   59674 cri.go:89] found id: ""
	I0722 11:54:32.334468   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.334478   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:32.334485   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:32.334544   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:32.371586   59674 cri.go:89] found id: ""
	I0722 11:54:32.371612   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.371622   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:32.371630   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:32.371693   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:32.419920   59674 cri.go:89] found id: ""
	I0722 11:54:32.419954   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.419966   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:32.419974   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:32.420034   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:32.459377   59674 cri.go:89] found id: ""
	I0722 11:54:32.459398   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.459405   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:32.459411   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:32.459472   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:32.500740   59674 cri.go:89] found id: ""
	I0722 11:54:32.500764   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.500771   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:32.500781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:32.500796   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:32.551285   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:32.551316   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:32.564448   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:32.564474   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:32.637652   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:32.637679   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:32.637694   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:32.721599   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:32.721638   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:35.265202   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:35.278766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:35.278844   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:35.312545   59674 cri.go:89] found id: ""
	I0722 11:54:35.312574   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.312582   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:35.312587   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:35.312637   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:35.346988   59674 cri.go:89] found id: ""
	I0722 11:54:35.347014   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.347024   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:35.347032   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:35.347090   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:35.382876   59674 cri.go:89] found id: ""
	I0722 11:54:35.382908   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.382920   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:35.382929   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:35.382997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:32.677327   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:34.677947   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:31.356011   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:33.356576   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:36.846349   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:39.346311   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:35.418093   59674 cri.go:89] found id: ""
	I0722 11:54:35.418115   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.418122   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:35.418129   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:35.418186   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:35.455262   59674 cri.go:89] found id: ""
	I0722 11:54:35.455291   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.455301   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:35.455306   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:35.455362   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:35.494893   59674 cri.go:89] found id: ""
	I0722 11:54:35.494924   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.494934   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:35.494945   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:35.495007   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:35.529768   59674 cri.go:89] found id: ""
	I0722 11:54:35.529791   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.529798   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:35.529804   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:35.529850   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:35.564972   59674 cri.go:89] found id: ""
	I0722 11:54:35.565001   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.565012   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:35.565024   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:35.565039   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:35.615985   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:35.616025   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:35.630133   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:35.630156   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:35.699669   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:35.699697   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:35.699711   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:35.779737   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:35.779771   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:38.320368   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:38.334371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:38.334443   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:38.371050   59674 cri.go:89] found id: ""
	I0722 11:54:38.371081   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.371088   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:38.371109   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:38.371170   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:38.410676   59674 cri.go:89] found id: ""
	I0722 11:54:38.410698   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.410706   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:38.410712   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:38.410770   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:38.447331   59674 cri.go:89] found id: ""
	I0722 11:54:38.447357   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.447366   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:38.447371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:38.447426   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:38.483548   59674 cri.go:89] found id: ""
	I0722 11:54:38.483589   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.483600   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:38.483608   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:38.483669   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:38.521694   59674 cri.go:89] found id: ""
	I0722 11:54:38.521723   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.521737   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:38.521742   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:38.521799   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:38.560507   59674 cri.go:89] found id: ""
	I0722 11:54:38.560532   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.560543   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:38.560550   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:38.560609   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:38.595734   59674 cri.go:89] found id: ""
	I0722 11:54:38.595761   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.595771   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:38.595778   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:38.595839   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:38.634176   59674 cri.go:89] found id: ""
	I0722 11:54:38.634198   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.634205   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:38.634213   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:38.634224   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:38.688196   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:38.688235   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:38.701554   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:38.701583   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:38.772547   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:38.772575   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:38.772590   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:38.858025   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:38.858056   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:37.179449   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:39.179903   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:35.856424   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:38.357566   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:41.347531   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:43.846195   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:41.400777   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:41.415370   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:41.415427   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:41.448023   59674 cri.go:89] found id: ""
	I0722 11:54:41.448045   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.448052   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:41.448058   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:41.448104   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:41.480745   59674 cri.go:89] found id: ""
	I0722 11:54:41.480766   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.480774   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:41.480779   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:41.480830   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:41.514627   59674 cri.go:89] found id: ""
	I0722 11:54:41.514651   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.514666   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:41.514673   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:41.514731   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:41.548226   59674 cri.go:89] found id: ""
	I0722 11:54:41.548255   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.548267   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:41.548274   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:41.548325   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:41.581361   59674 cri.go:89] found id: ""
	I0722 11:54:41.581383   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.581390   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:41.581396   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:41.581452   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:41.616249   59674 cri.go:89] found id: ""
	I0722 11:54:41.616277   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.616287   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:41.616295   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:41.616361   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:41.651569   59674 cri.go:89] found id: ""
	I0722 11:54:41.651593   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.651601   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:41.651607   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:41.651657   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:41.685173   59674 cri.go:89] found id: ""
	I0722 11:54:41.685194   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.685202   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:41.685209   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:41.685222   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:41.762374   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:41.762393   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:41.762405   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:41.843370   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:41.843403   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:41.883097   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:41.883127   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:41.933824   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:41.933854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:44.447568   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:44.461528   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:44.461608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:44.497926   59674 cri.go:89] found id: ""
	I0722 11:54:44.497951   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.497958   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:44.497963   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:44.498023   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:44.534483   59674 cri.go:89] found id: ""
	I0722 11:54:44.534507   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.534515   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:44.534520   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:44.534565   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:44.573106   59674 cri.go:89] found id: ""
	I0722 11:54:44.573140   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.573148   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:44.573154   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:44.573204   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:44.610565   59674 cri.go:89] found id: ""
	I0722 11:54:44.610612   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.610626   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:44.610634   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:44.610697   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:44.646946   59674 cri.go:89] found id: ""
	I0722 11:54:44.646980   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.646994   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:44.647001   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:44.647060   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:44.685876   59674 cri.go:89] found id: ""
	I0722 11:54:44.685904   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.685913   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:44.685919   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:44.685969   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:44.720398   59674 cri.go:89] found id: ""
	I0722 11:54:44.720425   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.720434   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:44.720441   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:44.720506   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:44.757472   59674 cri.go:89] found id: ""
	I0722 11:54:44.757501   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.757511   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:44.757522   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:44.757535   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:44.807442   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:44.807468   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:44.820432   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:44.820457   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:44.892182   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:44.892199   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:44.892209   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:44.976545   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:44.976580   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:41.677120   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:44.178554   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:40.855578   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:42.856278   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:44.857519   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:45.846257   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:47.846886   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:47.519413   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:47.532974   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:47.533035   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:47.570869   59674 cri.go:89] found id: ""
	I0722 11:54:47.570904   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.570915   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:47.570923   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:47.571055   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:47.606020   59674 cri.go:89] found id: ""
	I0722 11:54:47.606045   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.606052   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:47.606057   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:47.606106   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:47.642717   59674 cri.go:89] found id: ""
	I0722 11:54:47.642741   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.642752   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:47.642758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:47.642817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:47.677761   59674 cri.go:89] found id: ""
	I0722 11:54:47.677786   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.677796   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:47.677803   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:47.677863   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:47.710989   59674 cri.go:89] found id: ""
	I0722 11:54:47.711016   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.711025   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:47.711032   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:47.711097   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:47.744814   59674 cri.go:89] found id: ""
	I0722 11:54:47.744839   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.744847   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:47.744853   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:47.744904   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:47.778926   59674 cri.go:89] found id: ""
	I0722 11:54:47.778953   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.778960   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:47.778965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:47.779015   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:47.818419   59674 cri.go:89] found id: ""
	I0722 11:54:47.818458   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.818465   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:47.818473   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:47.818485   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:47.870867   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:47.870892   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:47.884504   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:47.884523   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:47.952449   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:47.952470   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:47.952485   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:48.035731   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:48.035763   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:46.181522   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:48.676888   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:46.860517   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:49.356456   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:50.346125   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:52.848790   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:50.589071   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:50.602786   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:50.602880   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:50.638324   59674 cri.go:89] found id: ""
	I0722 11:54:50.638355   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.638366   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:50.638375   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:50.638438   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:50.674906   59674 cri.go:89] found id: ""
	I0722 11:54:50.674932   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.674947   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:50.674955   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:50.675017   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:50.709284   59674 cri.go:89] found id: ""
	I0722 11:54:50.709313   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.709322   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:50.709328   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:50.709387   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:50.748595   59674 cri.go:89] found id: ""
	I0722 11:54:50.748623   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.748632   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:50.748638   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:50.748695   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:50.782681   59674 cri.go:89] found id: ""
	I0722 11:54:50.782707   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.782716   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:50.782721   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:50.782797   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:50.820037   59674 cri.go:89] found id: ""
	I0722 11:54:50.820067   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.820077   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:50.820084   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:50.820150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:50.857807   59674 cri.go:89] found id: ""
	I0722 11:54:50.857835   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.857845   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:50.857852   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:50.857925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:50.894924   59674 cri.go:89] found id: ""
	I0722 11:54:50.894946   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.894954   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:50.894962   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:50.894981   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:50.947373   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:50.947407   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:50.962243   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:50.962272   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:51.041450   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:51.041474   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:51.041488   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:51.133982   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:51.134018   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:53.678461   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:53.691710   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:53.691778   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:53.726266   59674 cri.go:89] found id: ""
	I0722 11:54:53.726294   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.726305   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:53.726313   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:53.726366   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:53.759262   59674 cri.go:89] found id: ""
	I0722 11:54:53.759291   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.759303   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:53.759311   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:53.759381   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:53.795859   59674 cri.go:89] found id: ""
	I0722 11:54:53.795894   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.795906   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:53.795913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:53.795975   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:53.842343   59674 cri.go:89] found id: ""
	I0722 11:54:53.842366   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.842379   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:53.842387   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:53.842444   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:53.882648   59674 cri.go:89] found id: ""
	I0722 11:54:53.882674   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.882684   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:53.882691   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:53.882751   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:53.914352   59674 cri.go:89] found id: ""
	I0722 11:54:53.914373   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.914380   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:53.914386   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:53.914442   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:53.952257   59674 cri.go:89] found id: ""
	I0722 11:54:53.952286   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.952296   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:53.952301   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:53.952348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:53.991612   59674 cri.go:89] found id: ""
	I0722 11:54:53.991642   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.991651   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:53.991661   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:53.991682   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:54.065253   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:54.065271   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:54.065285   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:54.153570   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:54.153603   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:54.195100   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:54.195138   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:54.246784   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:54.246812   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:50.677516   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:53.180319   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.182749   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:51.356623   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:53.856817   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.346845   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:57.846691   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:56.762702   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:56.776501   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:56.776567   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:56.809838   59674 cri.go:89] found id: ""
	I0722 11:54:56.809866   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.809874   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:56.809882   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:56.809934   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:56.845567   59674 cri.go:89] found id: ""
	I0722 11:54:56.845594   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.845602   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:56.845610   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:56.845672   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:56.879899   59674 cri.go:89] found id: ""
	I0722 11:54:56.879929   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.879939   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:56.879946   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:56.880000   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:56.911631   59674 cri.go:89] found id: ""
	I0722 11:54:56.911658   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.911667   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:56.911675   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:56.911734   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:56.946101   59674 cri.go:89] found id: ""
	I0722 11:54:56.946124   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.946132   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:56.946142   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:56.946211   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:56.980265   59674 cri.go:89] found id: ""
	I0722 11:54:56.980289   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.980301   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:56.980308   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:56.980367   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:57.014902   59674 cri.go:89] found id: ""
	I0722 11:54:57.014935   59674 logs.go:276] 0 containers: []
	W0722 11:54:57.014951   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:57.014958   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:57.015021   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:57.051573   59674 cri.go:89] found id: ""
	I0722 11:54:57.051597   59674 logs.go:276] 0 containers: []
	W0722 11:54:57.051605   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:57.051613   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:57.051626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:57.065650   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:57.065683   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:57.133230   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:57.133257   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:57.133275   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:57.217002   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:57.217038   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:57.260236   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:57.260264   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:59.812785   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:59.826782   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:59.826836   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:59.863375   59674 cri.go:89] found id: ""
	I0722 11:54:59.863404   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.863414   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:59.863423   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:59.863484   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:59.902161   59674 cri.go:89] found id: ""
	I0722 11:54:59.902193   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.902204   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:59.902211   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:59.902263   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:59.945153   59674 cri.go:89] found id: ""
	I0722 11:54:59.945182   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.945193   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:59.945201   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:59.945265   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:59.989535   59674 cri.go:89] found id: ""
	I0722 11:54:59.989562   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.989570   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:59.989575   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:59.989643   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:00.028977   59674 cri.go:89] found id: ""
	I0722 11:55:00.029001   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.029009   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:00.029017   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:00.029068   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:00.065396   59674 cri.go:89] found id: ""
	I0722 11:55:00.065425   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.065437   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:00.065447   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:00.065502   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:00.104354   59674 cri.go:89] found id: ""
	I0722 11:55:00.104397   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.104409   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:00.104417   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:00.104480   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:00.141798   59674 cri.go:89] found id: ""
	I0722 11:55:00.141822   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.141829   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:00.141838   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:00.141853   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:00.195791   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:00.195823   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:00.214812   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:00.214845   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:00.307286   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:00.307311   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:00.307323   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:00.409770   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:00.409805   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:57.676737   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:59.677273   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.857348   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:58.356555   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:59.846954   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.345998   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:04.346077   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.951630   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:02.964673   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:02.964728   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:03.005256   59674 cri.go:89] found id: ""
	I0722 11:55:03.005285   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.005296   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:03.005303   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:03.005359   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:03.037558   59674 cri.go:89] found id: ""
	I0722 11:55:03.037587   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.037595   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:03.037600   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:03.037646   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:03.071168   59674 cri.go:89] found id: ""
	I0722 11:55:03.071196   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.071206   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:03.071214   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:03.071271   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:03.104212   59674 cri.go:89] found id: ""
	I0722 11:55:03.104238   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.104248   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:03.104255   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:03.104313   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:03.141378   59674 cri.go:89] found id: ""
	I0722 11:55:03.141401   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.141409   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:03.141414   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:03.141458   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:03.178881   59674 cri.go:89] found id: ""
	I0722 11:55:03.178906   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.178915   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:03.178923   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:03.178987   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:03.215768   59674 cri.go:89] found id: ""
	I0722 11:55:03.215796   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.215804   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:03.215810   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:03.215854   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:03.256003   59674 cri.go:89] found id: ""
	I0722 11:55:03.256029   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.256041   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:03.256051   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:03.256069   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:03.308182   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:03.308216   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:03.323870   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:03.323903   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:03.406646   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:03.406670   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:03.406682   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:03.490947   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:03.490984   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:01.677312   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:03.677505   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:00.856013   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.856211   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:04.857113   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.348448   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:08.846007   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.030341   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:06.046814   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:06.046874   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:06.088735   59674 cri.go:89] found id: ""
	I0722 11:55:06.088756   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.088764   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:06.088770   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:06.088823   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:06.153138   59674 cri.go:89] found id: ""
	I0722 11:55:06.153165   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.153174   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:06.153181   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:06.153241   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:06.203479   59674 cri.go:89] found id: ""
	I0722 11:55:06.203506   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.203516   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:06.203523   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:06.203585   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:06.239632   59674 cri.go:89] found id: ""
	I0722 11:55:06.239661   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.239671   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:06.239678   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:06.239739   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:06.278663   59674 cri.go:89] found id: ""
	I0722 11:55:06.278693   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.278703   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:06.278711   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:06.278772   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:06.318291   59674 cri.go:89] found id: ""
	I0722 11:55:06.318315   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.318323   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:06.318329   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:06.318382   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:06.355362   59674 cri.go:89] found id: ""
	I0722 11:55:06.355383   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.355390   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:06.355395   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:06.355446   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:06.395032   59674 cri.go:89] found id: ""
	I0722 11:55:06.395062   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.395073   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:06.395084   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:06.395098   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:06.451585   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:06.451623   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:06.466009   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:06.466037   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:06.534051   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:06.534071   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:06.534082   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:06.617165   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:06.617202   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:09.155242   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:09.169327   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:09.169389   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:09.209138   59674 cri.go:89] found id: ""
	I0722 11:55:09.209165   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.209174   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:09.209181   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:09.209243   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:09.249129   59674 cri.go:89] found id: ""
	I0722 11:55:09.249156   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.249167   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:09.249175   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:09.249237   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:09.284350   59674 cri.go:89] found id: ""
	I0722 11:55:09.284374   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.284400   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:09.284416   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:09.284487   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:09.317288   59674 cri.go:89] found id: ""
	I0722 11:55:09.317315   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.317322   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:09.317327   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:09.317374   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:09.353227   59674 cri.go:89] found id: ""
	I0722 11:55:09.353249   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.353259   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:09.353266   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:09.353324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:09.388376   59674 cri.go:89] found id: ""
	I0722 11:55:09.388434   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.388442   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:09.388448   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:09.388498   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:09.422197   59674 cri.go:89] found id: ""
	I0722 11:55:09.422221   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.422228   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:09.422235   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:09.422282   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:09.455321   59674 cri.go:89] found id: ""
	I0722 11:55:09.455350   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.455360   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:09.455370   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:09.455384   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:09.536331   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:09.536366   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:09.578847   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:09.578880   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:09.630597   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:09.630626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:09.644531   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:09.644557   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:09.710502   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:05.677998   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:07.678875   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:10.179254   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.857151   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:09.355988   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:11.345887   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:13.346945   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:12.210716   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:12.223909   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:12.223969   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:12.259241   59674 cri.go:89] found id: ""
	I0722 11:55:12.259266   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.259275   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:12.259282   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:12.259344   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:12.293967   59674 cri.go:89] found id: ""
	I0722 11:55:12.294000   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.294007   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:12.294013   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:12.294061   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:12.328073   59674 cri.go:89] found id: ""
	I0722 11:55:12.328106   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.328114   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:12.328121   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:12.328180   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:12.363176   59674 cri.go:89] found id: ""
	I0722 11:55:12.363200   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.363207   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:12.363213   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:12.363287   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:12.398145   59674 cri.go:89] found id: ""
	I0722 11:55:12.398171   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.398180   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:12.398185   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:12.398231   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:12.431824   59674 cri.go:89] found id: ""
	I0722 11:55:12.431853   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.431861   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:12.431867   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:12.431925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:12.465097   59674 cri.go:89] found id: ""
	I0722 11:55:12.465128   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.465135   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:12.465140   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:12.465186   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:12.502934   59674 cri.go:89] found id: ""
	I0722 11:55:12.502965   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.502974   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:12.502984   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:12.502999   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:12.541950   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:12.541979   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:12.592632   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:12.592660   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:12.606073   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:12.606098   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:12.675388   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:12.675417   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:12.675432   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:15.253008   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:15.266957   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:15.267028   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:15.303035   59674 cri.go:89] found id: ""
	I0722 11:55:15.303069   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.303080   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:15.303088   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:15.303150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:15.338089   59674 cri.go:89] found id: ""
	I0722 11:55:15.338113   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.338121   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:15.338126   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:15.338184   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:15.376973   59674 cri.go:89] found id: ""
	I0722 11:55:15.376998   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.377005   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:15.377015   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:15.377075   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:12.678613   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.178912   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:11.356248   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:13.855992   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.845568   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:17.845680   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.416466   59674 cri.go:89] found id: ""
	I0722 11:55:15.416491   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.416500   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:15.416507   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:15.416565   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:15.456472   59674 cri.go:89] found id: ""
	I0722 11:55:15.456501   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.456511   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:15.456519   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:15.456580   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:15.491963   59674 cri.go:89] found id: ""
	I0722 11:55:15.491991   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.491999   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:15.492005   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:15.492062   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:15.530819   59674 cri.go:89] found id: ""
	I0722 11:55:15.530847   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.530857   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:15.530865   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:15.530934   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:15.569388   59674 cri.go:89] found id: ""
	I0722 11:55:15.569415   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.569422   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:15.569430   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:15.569439   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:15.623949   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:15.623981   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:15.637828   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:15.637848   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:15.707733   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:15.707754   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:15.707765   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:15.787435   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:15.787473   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:18.329310   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:18.342412   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:18.342476   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:18.379542   59674 cri.go:89] found id: ""
	I0722 11:55:18.379563   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.379570   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:18.379575   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:18.379657   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:18.414442   59674 cri.go:89] found id: ""
	I0722 11:55:18.414468   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.414477   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:18.414483   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:18.414526   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:18.454571   59674 cri.go:89] found id: ""
	I0722 11:55:18.454598   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.454608   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:18.454614   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:18.454658   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:18.491012   59674 cri.go:89] found id: ""
	I0722 11:55:18.491039   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.491047   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:18.491052   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:18.491114   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:18.525923   59674 cri.go:89] found id: ""
	I0722 11:55:18.525952   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.525962   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:18.525970   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:18.526031   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:18.560288   59674 cri.go:89] found id: ""
	I0722 11:55:18.560315   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.560325   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:18.560332   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:18.560412   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:18.596674   59674 cri.go:89] found id: ""
	I0722 11:55:18.596698   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.596706   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:18.596712   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:18.596766   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:18.635012   59674 cri.go:89] found id: ""
	I0722 11:55:18.635034   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.635041   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:18.635049   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:18.635060   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:18.685999   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:18.686024   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:18.700085   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:18.700108   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:18.765465   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:18.765484   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:18.765495   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:18.846554   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:18.846592   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:17.179144   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:19.677144   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.857428   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:18.356050   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:19.846343   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:22.345281   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:24.346147   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:21.389684   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:21.401964   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:21.402042   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:21.438128   59674 cri.go:89] found id: ""
	I0722 11:55:21.438156   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.438165   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:21.438171   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:21.438258   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:21.475793   59674 cri.go:89] found id: ""
	I0722 11:55:21.475819   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.475828   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:21.475833   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:21.475878   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:21.510238   59674 cri.go:89] found id: ""
	I0722 11:55:21.510265   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.510273   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:21.510278   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:21.510333   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:21.548293   59674 cri.go:89] found id: ""
	I0722 11:55:21.548320   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.548331   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:21.548337   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:21.548403   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:21.584542   59674 cri.go:89] found id: ""
	I0722 11:55:21.584573   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.584584   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:21.584591   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:21.584655   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:21.621709   59674 cri.go:89] found id: ""
	I0722 11:55:21.621745   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.621758   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:21.621767   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:21.621854   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:21.656111   59674 cri.go:89] found id: ""
	I0722 11:55:21.656134   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.656143   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:21.656148   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:21.656197   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:21.692324   59674 cri.go:89] found id: ""
	I0722 11:55:21.692353   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.692363   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:21.692374   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:21.692405   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:21.769527   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:21.769550   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:21.769566   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:21.850069   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:21.850107   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:21.890781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:21.890816   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:21.952170   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:21.952211   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:24.467001   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:24.481526   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:24.481594   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:24.518694   59674 cri.go:89] found id: ""
	I0722 11:55:24.518724   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.518734   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:24.518740   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:24.518798   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:24.554606   59674 cri.go:89] found id: ""
	I0722 11:55:24.554629   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.554637   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:24.554642   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:24.554703   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:24.592042   59674 cri.go:89] found id: ""
	I0722 11:55:24.592072   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.592083   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:24.592090   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:24.592158   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:24.624456   59674 cri.go:89] found id: ""
	I0722 11:55:24.624479   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.624487   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:24.624493   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:24.624540   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:24.659502   59674 cri.go:89] found id: ""
	I0722 11:55:24.659526   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.659533   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:24.659541   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:24.659586   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:24.695548   59674 cri.go:89] found id: ""
	I0722 11:55:24.695572   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.695580   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:24.695585   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:24.695632   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:24.730320   59674 cri.go:89] found id: ""
	I0722 11:55:24.730362   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.730383   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:24.730391   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:24.730451   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:24.763002   59674 cri.go:89] found id: ""
	I0722 11:55:24.763031   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.763042   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:24.763053   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:24.763068   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:24.801537   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:24.801568   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:24.855157   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:24.855189   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:24.872946   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:24.872983   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:24.943654   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:24.943683   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:24.943697   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:21.677205   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:23.677250   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:20.857023   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:22.857266   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:25.356958   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:24.840700   59477 pod_ready.go:81] duration metric: took 4m0.000727978s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" ...
	E0722 11:55:24.840728   59477 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:55:24.840745   59477 pod_ready.go:38] duration metric: took 4m14.023350526s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:55:24.840771   59477 kubeadm.go:597] duration metric: took 4m21.561007849s to restartPrimaryControlPlane
	W0722 11:55:24.840842   59477 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:55:24.840871   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:55:27.532539   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:27.551073   59674 kubeadm.go:597] duration metric: took 4m3.599954496s to restartPrimaryControlPlane
	W0722 11:55:27.551154   59674 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:55:27.551183   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:55:28.607726   59674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.056515088s)
	I0722 11:55:28.607808   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:55:28.622638   59674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:55:28.633327   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:55:28.643630   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:55:28.643657   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:55:28.643708   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:55:28.655424   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:55:28.655488   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:55:28.666415   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:55:28.678321   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:55:28.678387   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:55:28.687990   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:55:28.700637   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:55:28.700688   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:55:28.711737   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:55:28.723611   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:55:28.723672   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:55:28.734841   59674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:55:28.966498   59674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:55:25.677562   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:27.677626   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:29.678017   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:27.359533   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:29.856428   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:32.177943   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:34.677244   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:32.356225   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:34.357127   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:36.677815   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:39.176631   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:36.857121   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:38.857187   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:41.177346   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:43.179961   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:41.357029   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:43.857548   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:45.676921   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:47.677104   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:50.177979   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:45.858212   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:48.355737   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:50.357352   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:52.179852   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:54.678525   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:52.856789   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:54.857581   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:56.291211   59477 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.450312515s)
	I0722 11:55:56.291284   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:55:56.307108   59477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:55:56.316823   59477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:55:56.325987   59477 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:55:56.326008   59477 kubeadm.go:157] found existing configuration files:
	
	I0722 11:55:56.326040   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:55:56.334979   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:55:56.335022   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:55:56.344230   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:55:56.352903   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:55:56.352952   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:55:56.362589   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:55:56.371907   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:55:56.371960   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:55:56.381248   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:55:56.389979   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:55:56.390029   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:55:56.399143   59477 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:55:56.451195   59477 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 11:55:56.451336   59477 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:55:56.583288   59477 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:55:56.583416   59477 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:55:56.583545   59477 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:55:56.812941   59477 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:55:56.814801   59477 out.go:204]   - Generating certificates and keys ...
	I0722 11:55:56.814907   59477 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:55:56.815004   59477 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:55:56.815107   59477 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:55:56.815158   59477 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:55:56.815245   59477 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:55:56.815328   59477 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:55:56.815398   59477 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:55:56.815472   59477 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:55:56.815551   59477 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:55:56.815665   59477 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:55:56.815720   59477 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:55:56.815792   59477 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:55:56.905480   59477 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:55:57.235259   59477 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:55:57.382716   59477 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:55:57.782474   59477 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:55:57.975512   59477 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:55:57.975939   59477 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:55:57.978251   59477 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:55:57.980183   59477 out.go:204]   - Booting up control plane ...
	I0722 11:55:57.980296   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:55:57.980407   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:55:57.980501   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:55:57.997417   59477 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:55:57.998246   59477 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:55:57.998298   59477 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:55:58.125569   59477 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:55:58.125669   59477 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:55:59.127130   59477 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00142245s
	I0722 11:55:59.127288   59477 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:55:56.679572   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:59.177683   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:56.858200   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:59.356467   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:04.131970   59477 kubeadm.go:310] [api-check] The API server is healthy after 5.00210234s
	I0722 11:56:04.145149   59477 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:56:04.162087   59477 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:56:04.189220   59477 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:56:04.189501   59477 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-802149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:56:04.201016   59477 kubeadm.go:310] [bootstrap-token] Using token: kquhfx.1qbb4r033babuox0
	I0722 11:56:04.202331   59477 out.go:204]   - Configuring RBAC rules ...
	I0722 11:56:04.202440   59477 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:56:04.207324   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:56:04.217174   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:56:04.221591   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:56:04.225670   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:56:04.229980   59477 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:56:04.540237   59477 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:56:01.677898   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:03.678604   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:05.015791   59477 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:56:05.538526   59477 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:56:05.539474   59477 kubeadm.go:310] 
	I0722 11:56:05.539573   59477 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:56:05.539585   59477 kubeadm.go:310] 
	I0722 11:56:05.539684   59477 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:56:05.539701   59477 kubeadm.go:310] 
	I0722 11:56:05.539735   59477 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:56:05.539818   59477 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:56:05.539894   59477 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:56:05.539903   59477 kubeadm.go:310] 
	I0722 11:56:05.540003   59477 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:56:05.540026   59477 kubeadm.go:310] 
	I0722 11:56:05.540102   59477 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:56:05.540111   59477 kubeadm.go:310] 
	I0722 11:56:05.540178   59477 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:56:05.540280   59477 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:56:05.540390   59477 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:56:05.540399   59477 kubeadm.go:310] 
	I0722 11:56:05.540496   59477 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:56:05.540612   59477 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:56:05.540621   59477 kubeadm.go:310] 
	I0722 11:56:05.540765   59477 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kquhfx.1qbb4r033babuox0 \
	I0722 11:56:05.540917   59477 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:56:05.540954   59477 kubeadm.go:310] 	--control-plane 
	I0722 11:56:05.540963   59477 kubeadm.go:310] 
	I0722 11:56:05.541073   59477 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:56:05.541082   59477 kubeadm.go:310] 
	I0722 11:56:05.541188   59477 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kquhfx.1qbb4r033babuox0 \
	I0722 11:56:05.541330   59477 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:56:05.541765   59477 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:56:05.541892   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:56:05.541910   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:56:05.543345   59477 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:56:01.357811   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:03.359464   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:04.851108   60225 pod_ready.go:81] duration metric: took 4m0.000807254s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" ...
	E0722 11:56:04.851137   60225 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:56:04.851154   60225 pod_ready.go:38] duration metric: took 4m12.048821409s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:04.851185   60225 kubeadm.go:597] duration metric: took 4m21.969513024s to restartPrimaryControlPlane
	W0722 11:56:04.851256   60225 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:56:04.851288   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:56:05.544535   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:56:05.556946   59477 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:56:05.578633   59477 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:56:05.578705   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:05.578715   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-802149 minikube.k8s.io/updated_at=2024_07_22T11_56_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=embed-certs-802149 minikube.k8s.io/primary=true
	I0722 11:56:05.614944   59477 ops.go:34] apiserver oom_adj: -16
	I0722 11:56:05.773354   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:06.273578   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:06.773980   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:07.274302   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:07.774175   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:08.274316   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:08.774096   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:09.273401   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:05.678724   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:08.178575   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:09.774010   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.274337   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.773845   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:11.273387   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:11.773610   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:12.273579   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:12.774429   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:13.273474   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:13.774397   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:14.273900   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.677662   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:12.679646   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:15.177660   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:14.774140   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:15.273579   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:15.773981   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:16.273668   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:16.773814   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:17.274094   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:17.773477   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:18.273407   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:18.774424   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:19.274215   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:19.371507   59477 kubeadm.go:1113] duration metric: took 13.792861511s to wait for elevateKubeSystemPrivileges
	I0722 11:56:19.371549   59477 kubeadm.go:394] duration metric: took 5m16.138448524s to StartCluster
	I0722 11:56:19.371572   59477 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:19.371660   59477 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:56:19.373430   59477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:19.373759   59477 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:56:19.373841   59477 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:56:19.373922   59477 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-802149"
	I0722 11:56:19.373932   59477 addons.go:69] Setting default-storageclass=true in profile "embed-certs-802149"
	I0722 11:56:19.373962   59477 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-802149"
	I0722 11:56:19.373963   59477 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-802149"
	W0722 11:56:19.373971   59477 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:56:19.373974   59477 addons.go:69] Setting metrics-server=true in profile "embed-certs-802149"
	I0722 11:56:19.373998   59477 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:56:19.374004   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.374013   59477 addons.go:234] Setting addon metrics-server=true in "embed-certs-802149"
	W0722 11:56:19.374021   59477 addons.go:243] addon metrics-server should already be in state true
	I0722 11:56:19.374044   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.374353   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374376   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374383   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.374390   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374401   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.374418   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.375347   59477 out.go:177] * Verifying Kubernetes components...
	I0722 11:56:19.376850   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:56:19.393500   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0722 11:56:19.394178   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.394524   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36485
	I0722 11:56:19.394704   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0722 11:56:19.394894   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.395064   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395087   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395137   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.395433   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395451   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395471   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.395586   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395607   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395653   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.395754   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.395956   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.396317   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.396345   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.396481   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.396512   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.399476   59477 addons.go:234] Setting addon default-storageclass=true in "embed-certs-802149"
	W0722 11:56:19.399502   59477 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:56:19.399530   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.399879   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.399908   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.411862   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44855
	I0722 11:56:19.412247   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.412708   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.412733   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.413106   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.413324   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.414100   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0722 11:56:19.414530   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.415017   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.415041   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.415100   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.415300   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0722 11:56:19.415340   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.415574   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.415662   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.416068   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.416095   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.416416   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.416861   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.416905   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.417086   59477 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:56:19.417365   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.418373   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:56:19.418392   59477 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:56:19.418411   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.419202   59477 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:56:19.420581   59477 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:19.420595   59477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:56:19.420608   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.421600   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.422201   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.422224   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.422367   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.422535   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.422697   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.422820   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.423577   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.424183   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.424211   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.424347   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.424543   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.424694   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.424812   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.432998   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0722 11:56:19.433395   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.433820   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.433837   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.434137   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.434300   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.435804   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.436013   59477 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:19.436029   59477 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:56:19.436043   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.439161   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.439507   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.439527   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.439666   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.439832   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.439968   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.440111   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.579586   59477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:56:19.613199   59477 node_ready.go:35] waiting up to 6m0s for node "embed-certs-802149" to be "Ready" ...
	I0722 11:56:19.621008   59477 node_ready.go:49] node "embed-certs-802149" has status "Ready":"True"
	I0722 11:56:19.621026   59477 node_ready.go:38] duration metric: took 7.803634ms for node "embed-certs-802149" to be "Ready" ...
	I0722 11:56:19.621035   59477 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:19.626247   59477 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:17.676844   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:19.677982   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:19.721316   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:19.751087   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:19.752762   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:56:19.752782   59477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:56:19.855879   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:56:19.855913   59477 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:56:19.929321   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:19.929353   59477 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:56:19.985335   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:20.449104   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449132   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449106   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449220   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449514   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449514   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.449531   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449540   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.449553   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449566   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449879   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449880   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.449902   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.450851   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.450865   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.450872   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.450877   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.451078   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.451104   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.451119   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.470273   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.470292   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.470576   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.470623   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.470597   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.627931   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.627953   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.628276   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.628294   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.628293   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.628308   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.628317   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.628560   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.628605   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.628619   59477 addons.go:475] Verifying addon metrics-server=true in "embed-certs-802149"
	I0722 11:56:20.628625   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.630168   59477 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:56:20.631410   59477 addons.go:510] duration metric: took 1.257573445s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 11:56:21.631628   59477 pod_ready.go:102] pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:22.159823   59477 pod_ready.go:92] pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.159847   59477 pod_ready.go:81] duration metric: took 2.533579062s for pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.159856   59477 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.180462   59477 pod_ready.go:92] pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.180487   59477 pod_ready.go:81] duration metric: took 20.623565ms for pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.180499   59477 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.194180   59477 pod_ready.go:92] pod "etcd-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.194207   59477 pod_ready.go:81] duration metric: took 13.700217ms for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.194219   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.199321   59477 pod_ready.go:92] pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.199343   59477 pod_ready.go:81] duration metric: took 5.116564ms for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.199355   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.203845   59477 pod_ready.go:92] pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.203865   59477 pod_ready.go:81] duration metric: took 4.502825ms for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.203875   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w89tg" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.529773   59477 pod_ready.go:92] pod "kube-proxy-w89tg" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.529797   59477 pod_ready.go:81] duration metric: took 325.914184ms for pod "kube-proxy-w89tg" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.529809   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.930597   59477 pod_ready.go:92] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.930620   59477 pod_ready.go:81] duration metric: took 400.802915ms for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.930631   59477 pod_ready.go:38] duration metric: took 3.309586025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:22.930649   59477 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:56:22.930707   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:56:22.946660   59477 api_server.go:72] duration metric: took 3.57286966s to wait for apiserver process to appear ...
	I0722 11:56:22.946684   59477 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:56:22.946703   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:56:22.950940   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I0722 11:56:22.951817   59477 api_server.go:141] control plane version: v1.30.3
	I0722 11:56:22.951840   59477 api_server.go:131] duration metric: took 5.148295ms to wait for apiserver health ...
	I0722 11:56:22.951848   59477 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:56:23.134122   59477 system_pods.go:59] 9 kube-system pods found
	I0722 11:56:23.134153   59477 system_pods.go:61] "coredns-7db6d8ff4d-c2dkr" [c82a689e-5a99-4889-808f-3e1e199323d8] Running
	I0722 11:56:23.134159   59477 system_pods.go:61] "coredns-7db6d8ff4d-kz8d9" [26d2d65c-aa13-4d94-b091-bf674fee0185] Running
	I0722 11:56:23.134163   59477 system_pods.go:61] "etcd-embed-certs-802149" [a30aead7-f9cf-487f-9d2e-dac877edf07a] Running
	I0722 11:56:23.134166   59477 system_pods.go:61] "kube-apiserver-embed-certs-802149" [b7f50315-5043-4abd-a40e-c2d285b66faa] Running
	I0722 11:56:23.134169   59477 system_pods.go:61] "kube-controller-manager-embed-certs-802149" [e96d4b9d-0bf0-40b4-b483-ba558587fe91] Running
	I0722 11:56:23.134172   59477 system_pods.go:61] "kube-proxy-w89tg" [da4d3074-e552-4c7b-ba0f-f57a3b80f529] Running
	I0722 11:56:23.134175   59477 system_pods.go:61] "kube-scheduler-embed-certs-802149" [5f1d8d32-7564-4d2a-ba97-283754771b15] Running
	I0722 11:56:23.134181   59477 system_pods.go:61] "metrics-server-569cc877fc-88d4n" [b705d674-b431-4946-aa67-871d7d2f9e08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:56:23.134186   59477 system_pods.go:61] "storage-provisioner" [a68fcb5f-42b5-408e-9c10-d86b14a1b993] Running
	I0722 11:56:23.134195   59477 system_pods.go:74] duration metric: took 182.340929ms to wait for pod list to return data ...
	I0722 11:56:23.134204   59477 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:56:23.330549   59477 default_sa.go:45] found service account: "default"
	I0722 11:56:23.330573   59477 default_sa.go:55] duration metric: took 196.363183ms for default service account to be created ...
	I0722 11:56:23.330582   59477 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:56:23.532750   59477 system_pods.go:86] 9 kube-system pods found
	I0722 11:56:23.532774   59477 system_pods.go:89] "coredns-7db6d8ff4d-c2dkr" [c82a689e-5a99-4889-808f-3e1e199323d8] Running
	I0722 11:56:23.532779   59477 system_pods.go:89] "coredns-7db6d8ff4d-kz8d9" [26d2d65c-aa13-4d94-b091-bf674fee0185] Running
	I0722 11:56:23.532784   59477 system_pods.go:89] "etcd-embed-certs-802149" [a30aead7-f9cf-487f-9d2e-dac877edf07a] Running
	I0722 11:56:23.532788   59477 system_pods.go:89] "kube-apiserver-embed-certs-802149" [b7f50315-5043-4abd-a40e-c2d285b66faa] Running
	I0722 11:56:23.532795   59477 system_pods.go:89] "kube-controller-manager-embed-certs-802149" [e96d4b9d-0bf0-40b4-b483-ba558587fe91] Running
	I0722 11:56:23.532799   59477 system_pods.go:89] "kube-proxy-w89tg" [da4d3074-e552-4c7b-ba0f-f57a3b80f529] Running
	I0722 11:56:23.532802   59477 system_pods.go:89] "kube-scheduler-embed-certs-802149" [5f1d8d32-7564-4d2a-ba97-283754771b15] Running
	I0722 11:56:23.532809   59477 system_pods.go:89] "metrics-server-569cc877fc-88d4n" [b705d674-b431-4946-aa67-871d7d2f9e08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:56:23.532813   59477 system_pods.go:89] "storage-provisioner" [a68fcb5f-42b5-408e-9c10-d86b14a1b993] Running
	I0722 11:56:23.532821   59477 system_pods.go:126] duration metric: took 202.234836ms to wait for k8s-apps to be running ...
	I0722 11:56:23.532832   59477 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:56:23.532876   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:56:23.547953   59477 system_svc.go:56] duration metric: took 15.113032ms WaitForService to wait for kubelet
	I0722 11:56:23.547983   59477 kubeadm.go:582] duration metric: took 4.174196727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:56:23.548007   59477 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:56:23.730474   59477 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:56:23.730495   59477 node_conditions.go:123] node cpu capacity is 2
	I0722 11:56:23.730505   59477 node_conditions.go:105] duration metric: took 182.492899ms to run NodePressure ...
	I0722 11:56:23.730516   59477 start.go:241] waiting for startup goroutines ...
	I0722 11:56:23.730522   59477 start.go:246] waiting for cluster config update ...
	I0722 11:56:23.730532   59477 start.go:255] writing updated cluster config ...
	I0722 11:56:23.730772   59477 ssh_runner.go:195] Run: rm -f paused
	I0722 11:56:23.780571   59477 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 11:56:23.782536   59477 out.go:177] * Done! kubectl is now configured to use "embed-certs-802149" cluster and "default" namespace by default
	I0722 11:56:22.178416   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:24.676529   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:26.677122   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:29.177390   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:31.677291   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:33.677523   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:35.170828   58921 pod_ready.go:81] duration metric: took 4m0.000275806s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" ...
	E0722 11:56:35.170855   58921 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:56:35.170871   58921 pod_ready.go:38] duration metric: took 4m13.545311637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:35.170901   58921 kubeadm.go:597] duration metric: took 4m20.764141089s to restartPrimaryControlPlane
	W0722 11:56:35.170949   58921 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:56:35.170973   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:56:36.176806   60225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.325500952s)
	I0722 11:56:36.176871   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:56:36.193398   60225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:56:36.203561   60225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:56:36.213561   60225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:56:36.213584   60225 kubeadm.go:157] found existing configuration files:
	
	I0722 11:56:36.213654   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 11:56:36.223204   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:56:36.223265   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:56:36.232550   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 11:56:36.241899   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:56:36.241961   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:56:36.252184   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 11:56:36.262462   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:56:36.262518   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:56:36.272942   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 11:56:36.282776   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:56:36.282831   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:56:36.292375   60225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:56:36.490647   60225 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:56:44.713923   60225 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 11:56:44.713975   60225 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:56:44.714046   60225 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:56:44.714145   60225 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:56:44.714255   60225 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:56:44.714330   60225 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:56:44.715906   60225 out.go:204]   - Generating certificates and keys ...
	I0722 11:56:44.716026   60225 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:56:44.716122   60225 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:56:44.716247   60225 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:56:44.716346   60225 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:56:44.716449   60225 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:56:44.716530   60225 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:56:44.716617   60225 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:56:44.716704   60225 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:56:44.716820   60225 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:56:44.716939   60225 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:56:44.717000   60225 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:56:44.717078   60225 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:56:44.717159   60225 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:56:44.717238   60225 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:56:44.717312   60225 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:56:44.717397   60225 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:56:44.717471   60225 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:56:44.717594   60225 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:56:44.717684   60225 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:56:44.719097   60225 out.go:204]   - Booting up control plane ...
	I0722 11:56:44.719201   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:56:44.719288   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:56:44.719387   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:56:44.719548   60225 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:56:44.719662   60225 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:56:44.719698   60225 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:56:44.719819   60225 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:56:44.719909   60225 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:56:44.719969   60225 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001605769s
	I0722 11:56:44.720047   60225 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:56:44.720114   60225 kubeadm.go:310] [api-check] The API server is healthy after 4.501377908s
	I0722 11:56:44.720253   60225 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:56:44.720428   60225 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:56:44.720522   60225 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:56:44.720781   60225 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-605740 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:56:44.720842   60225 kubeadm.go:310] [bootstrap-token] Using token: 51n0hg.x5nghdd43rf7nm3m
	I0722 11:56:44.722095   60225 out.go:204]   - Configuring RBAC rules ...
	I0722 11:56:44.722193   60225 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:56:44.722266   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:56:44.722405   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:56:44.722575   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:56:44.722695   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:56:44.722769   60225 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:56:44.722875   60225 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:56:44.722916   60225 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:56:44.722957   60225 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:56:44.722966   60225 kubeadm.go:310] 
	I0722 11:56:44.723046   60225 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:56:44.723055   60225 kubeadm.go:310] 
	I0722 11:56:44.723117   60225 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:56:44.723123   60225 kubeadm.go:310] 
	I0722 11:56:44.723147   60225 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:56:44.723201   60225 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:56:44.723244   60225 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:56:44.723250   60225 kubeadm.go:310] 
	I0722 11:56:44.723313   60225 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:56:44.723324   60225 kubeadm.go:310] 
	I0722 11:56:44.723374   60225 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:56:44.723387   60225 kubeadm.go:310] 
	I0722 11:56:44.723462   60225 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:56:44.723568   60225 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:56:44.723624   60225 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:56:44.723630   60225 kubeadm.go:310] 
	I0722 11:56:44.723703   60225 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:56:44.723762   60225 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:56:44.723768   60225 kubeadm.go:310] 
	I0722 11:56:44.723832   60225 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 51n0hg.x5nghdd43rf7nm3m \
	I0722 11:56:44.723935   60225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:56:44.723960   60225 kubeadm.go:310] 	--control-plane 
	I0722 11:56:44.723966   60225 kubeadm.go:310] 
	I0722 11:56:44.724035   60225 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:56:44.724041   60225 kubeadm.go:310] 
	I0722 11:56:44.724109   60225 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 51n0hg.x5nghdd43rf7nm3m \
	I0722 11:56:44.724210   60225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:56:44.724222   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:56:44.724231   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:56:44.725651   60225 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:56:44.726843   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:56:44.737856   60225 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
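The 496-byte file copied here is the bridge CNI configuration used with the crio runtime. To inspect exactly what was written on the node, something like the following works (illustrative, using the same profile):

    out/minikube-linux-amd64 -p default-k8s-diff-port-605740 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"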
	I0722 11:56:44.756687   60225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:56:44.756772   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:44.756790   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-605740 minikube.k8s.io/updated_at=2024_07_22T11_56_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=default-k8s-diff-port-605740 minikube.k8s.io/primary=true
	I0722 11:56:44.782416   60225 ops.go:34] apiserver oom_adj: -16
	I0722 11:56:44.957801   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:45.458616   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:45.958542   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:46.458436   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:46.957908   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:47.458058   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:47.958520   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:48.457970   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:48.958357   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:49.457964   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:49.958236   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:50.458547   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:50.958594   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:51.457865   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:51.958297   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:52.458486   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:52.957877   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:53.458199   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:53.958331   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:54.458178   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:54.958725   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:55.458619   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:55.958861   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:56.458294   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:56.958145   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:57.458414   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:57.566568   60225 kubeadm.go:1113] duration metric: took 12.809852518s to wait for elevateKubeSystemPrivileges
	I0722 11:56:57.566604   60225 kubeadm.go:394] duration metric: took 5m14.748062926s to StartCluster
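The burst of identical "kubectl get sa default" runs above is a readiness poll: kubeadm init returns before the "default" ServiceAccount exists, so the command is retried (roughly every half second, judging by the timestamps) until it succeeds, which accounts for the 12.8s elevateKubeSystemPrivileges wait reported here. A minimal sketch of an equivalent loop:

    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the "default" ServiceAccount appears
    done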
	I0722 11:56:57.566626   60225 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:57.566709   60225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:56:57.568307   60225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:57.568592   60225 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:56:57.568648   60225 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:56:57.568731   60225 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568765   60225 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.568778   60225 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:56:57.568777   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:56:57.568765   60225 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568775   60225 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568811   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.568813   60225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-605740"
	I0722 11:56:57.568819   60225 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.568828   60225 addons.go:243] addon metrics-server should already be in state true
	I0722 11:56:57.568849   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.569145   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569170   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569187   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.569191   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.569216   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569265   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.570171   60225 out.go:177] * Verifying Kubernetes components...
	I0722 11:56:57.571536   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:56:57.585174   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44223
	I0722 11:56:57.585655   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.586149   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.586174   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.586532   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.587082   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.587135   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.588871   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43863
	I0722 11:56:57.588968   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0722 11:56:57.589289   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.589398   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.589785   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.589809   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.589875   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.589898   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.590183   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.590223   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.590393   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.590860   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.590906   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.594024   60225 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.594046   60225 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:56:57.594074   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.594755   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.594794   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.604913   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0722 11:56:57.605449   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.606000   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.606017   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.606459   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37393
	I0722 11:56:57.606768   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.606871   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.607129   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.607259   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.607273   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.607591   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.607779   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.609472   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.609513   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46833
	I0722 11:56:57.609611   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.609857   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.610299   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.610314   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.610552   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.611030   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.611066   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.611075   60225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:56:57.611086   60225 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:56:57.612333   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:56:57.612352   60225 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:56:57.612373   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.612449   60225 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:57.612463   60225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:56:57.612480   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.615359   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.615950   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.615979   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.616137   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.616288   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.616341   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.616503   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.616636   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.616806   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.616830   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.617016   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.617204   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.617433   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.617587   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.627323   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0722 11:56:57.627674   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.628110   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.628129   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.628426   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.628581   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.630063   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.630250   60225 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:57.630264   60225 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:56:57.630276   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.633223   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.633589   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.633652   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.633864   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.634041   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.634208   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.634349   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.800318   60225 ssh_runner.go:195] Run: sudo systemctl start kubelet
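The explicit kubelet start here follows the earlier preflight warning that the kubelet service is not enabled. On the node, its state can be confirmed with (illustrative):

    sudo systemctl is-enabled kubelet
    sudo systemctl is-active kubelet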
	I0722 11:56:57.838800   60225 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-605740" to be "Ready" ...
	I0722 11:56:57.858375   60225 node_ready.go:49] node "default-k8s-diff-port-605740" has status "Ready":"True"
	I0722 11:56:57.858401   60225 node_ready.go:38] duration metric: took 19.564389ms for node "default-k8s-diff-port-605740" to be "Ready" ...
	I0722 11:56:57.858412   60225 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:57.864271   60225 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.891296   60225 pod_ready.go:92] pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.891327   60225 pod_ready.go:81] duration metric: took 27.02499ms for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.891341   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.904548   60225 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.904572   60225 pod_ready.go:81] duration metric: took 13.223985ms for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.904582   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.922071   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:56:57.922090   60225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:56:57.936115   60225 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.936135   60225 pod_ready.go:81] duration metric: took 31.547556ms for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.936144   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-58qcp" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.956826   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:57.959831   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:57.970183   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:56:57.970209   60225 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:56:58.023756   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:58.023783   60225 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:56:58.132167   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:58.836074   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836101   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836129   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836151   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836444   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.836480   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836489   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.836496   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836507   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836603   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.836635   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836645   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.836653   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836660   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836797   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836809   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.838425   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.838441   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.838457   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.855236   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.855255   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.855533   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.855551   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.855558   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:59.133028   60225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.000816157s)
	I0722 11:56:59.133092   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:59.133108   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:59.133395   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:59.133412   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:59.133420   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:59.133428   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:59.133715   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:59.133744   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:59.133766   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:59.133788   60225 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-605740"
	I0722 11:56:59.135326   60225 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:56:59.136408   60225 addons.go:510] duration metric: took 1.567760763s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
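With the addons reported as enabled, the metrics-server objects applied from the four manifests above can be checked directly, for example (illustrative; assumes the standard metrics-server object names):

    kubectl --context default-k8s-diff-port-605740 -n kube-system get deployment metrics-server
    kubectl --context default-k8s-diff-port-605740 get apiservice v1beta1.metrics.k8s.io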
	I0722 11:56:59.942782   60225 pod_ready.go:102] pod "kube-proxy-58qcp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:00.442434   60225 pod_ready.go:92] pod "kube-proxy-58qcp" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:00.442455   60225 pod_ready.go:81] duration metric: took 2.50630376s for pod "kube-proxy-58qcp" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.442463   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.446225   60225 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:00.446246   60225 pod_ready.go:81] duration metric: took 3.778284ms for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.446254   60225 pod_ready.go:38] duration metric: took 2.58782997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:57:00.446267   60225 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:57:00.446310   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:57:00.461412   60225 api_server.go:72] duration metric: took 2.892790415s to wait for apiserver process to appear ...
	I0722 11:57:00.461431   60225 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:57:00.461448   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:57:00.465904   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 200:
	ok
	I0722 11:57:00.466558   60225 api_server.go:141] control plane version: v1.30.3
	I0722 11:57:00.466577   60225 api_server.go:131] duration metric: took 5.13931ms to wait for apiserver health ...
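The healthz probe above can be reproduced by hand; a healthy apiserver returns HTTP 200 with the body "ok" (illustrative; -k skips verification against the cluster CA):

    curl -k https://192.168.39.87:8444/healthz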
	I0722 11:57:00.466585   60225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:57:00.471230   60225 system_pods.go:59] 9 kube-system pods found
	I0722 11:57:00.471254   60225 system_pods.go:61] "coredns-7db6d8ff4d-nlfgl" [c02dda63-e71b-429f-b9d5-0b2ca40e8dcc] Running
	I0722 11:57:00.471260   60225 system_pods.go:61] "coredns-7db6d8ff4d-tnnxf" [337c6df7-035c-488d-a123-a410d76d836b] Running
	I0722 11:57:00.471265   60225 system_pods.go:61] "etcd-default-k8s-diff-port-605740" [d1cda641-22de-4bda-ac2b-ed0a92bbde9f] Running
	I0722 11:57:00.471270   60225 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-605740" [b3c4fc30-392e-40db-8be3-fea337a40ca5] Running
	I0722 11:57:00.471274   60225 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-605740" [7cddd0d3-487d-47f5-9c6d-873c5731170e] Running
	I0722 11:57:00.471279   60225 system_pods.go:61] "kube-proxy-58qcp" [25c02c70-a840-410c-9d48-3d15a3927a77] Running
	I0722 11:57:00.471283   60225 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-605740" [b7678aec-927b-40c2-a91e-4c0444c59a90] Running
	I0722 11:57:00.471293   60225 system_pods.go:61] "metrics-server-569cc877fc-2xv7x" [7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:00.471299   60225 system_pods.go:61] "storage-provisioner" [c4ff4a3e-008c-4c4e-9eb3-281c46b10279] Running
	I0722 11:57:00.471309   60225 system_pods.go:74] duration metric: took 4.717009ms to wait for pod list to return data ...
	I0722 11:57:00.471320   60225 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:57:00.642325   60225 default_sa.go:45] found service account: "default"
	I0722 11:57:00.642356   60225 default_sa.go:55] duration metric: took 171.03007ms for default service account to be created ...
	I0722 11:57:00.642365   60225 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:57:00.846043   60225 system_pods.go:86] 9 kube-system pods found
	I0722 11:57:00.846071   60225 system_pods.go:89] "coredns-7db6d8ff4d-nlfgl" [c02dda63-e71b-429f-b9d5-0b2ca40e8dcc] Running
	I0722 11:57:00.846079   60225 system_pods.go:89] "coredns-7db6d8ff4d-tnnxf" [337c6df7-035c-488d-a123-a410d76d836b] Running
	I0722 11:57:00.846083   60225 system_pods.go:89] "etcd-default-k8s-diff-port-605740" [d1cda641-22de-4bda-ac2b-ed0a92bbde9f] Running
	I0722 11:57:00.846087   60225 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-605740" [b3c4fc30-392e-40db-8be3-fea337a40ca5] Running
	I0722 11:57:00.846092   60225 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-605740" [7cddd0d3-487d-47f5-9c6d-873c5731170e] Running
	I0722 11:57:00.846096   60225 system_pods.go:89] "kube-proxy-58qcp" [25c02c70-a840-410c-9d48-3d15a3927a77] Running
	I0722 11:57:00.846100   60225 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-605740" [b7678aec-927b-40c2-a91e-4c0444c59a90] Running
	I0722 11:57:00.846106   60225 system_pods.go:89] "metrics-server-569cc877fc-2xv7x" [7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:00.846110   60225 system_pods.go:89] "storage-provisioner" [c4ff4a3e-008c-4c4e-9eb3-281c46b10279] Running
	I0722 11:57:00.846118   60225 system_pods.go:126] duration metric: took 203.748606ms to wait for k8s-apps to be running ...
	I0722 11:57:00.846124   60225 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:57:00.846168   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:00.867261   60225 system_svc.go:56] duration metric: took 21.130025ms WaitForService to wait for kubelet
	I0722 11:57:00.867290   60225 kubeadm.go:582] duration metric: took 3.298668854s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:57:00.867314   60225 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:57:01.042201   60225 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:57:01.042226   60225 node_conditions.go:123] node cpu capacity is 2
	I0722 11:57:01.042237   60225 node_conditions.go:105] duration metric: took 174.91764ms to run NodePressure ...
	I0722 11:57:01.042249   60225 start.go:241] waiting for startup goroutines ...
	I0722 11:57:01.042256   60225 start.go:246] waiting for cluster config update ...
	I0722 11:57:01.042268   60225 start.go:255] writing updated cluster config ...
	I0722 11:57:01.042526   60225 ssh_runner.go:195] Run: rm -f paused
	I0722 11:57:01.090643   60225 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 11:57:01.092526   60225 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-605740" cluster and "default" namespace by default
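At this point the default-k8s-diff-port-605740 context is active for kubectl, so the freshly started cluster can be inspected directly, e.g. (illustrative):

    kubectl get nodes
    kubectl -n kube-system get pods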
	I0722 11:57:01.339755   58921 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.168752701s)
	I0722 11:57:01.339827   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:01.368833   58921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:57:01.392011   58921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:57:01.403725   58921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:57:01.403746   58921 kubeadm.go:157] found existing configuration files:
	
	I0722 11:57:01.403795   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:57:01.421922   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:57:01.422011   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:57:01.434303   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:57:01.445095   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:57:01.445154   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:57:01.464906   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:57:01.475002   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:57:01.475074   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:57:01.484493   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:57:01.493467   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:57:01.493523   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:57:01.502496   58921 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:57:01.550079   58921 kubeadm.go:310] W0722 11:57:01.524041    2933 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 11:57:01.551819   58921 kubeadm.go:310] W0722 11:57:01.525728    2933 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
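The two warnings above note that the generated kubeadm config still uses the deprecated v1beta3 API; the migration they suggest is (command copied from the warning text, for reference):

    kubeadm config migrate --old-config old.yaml --new-config new.yaml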
	I0722 11:57:01.670102   58921 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:57:10.497048   58921 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0722 11:57:10.497168   58921 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:57:10.497273   58921 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:57:10.497381   58921 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:57:10.497498   58921 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 11:57:10.497555   58921 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:57:10.498805   58921 out.go:204]   - Generating certificates and keys ...
	I0722 11:57:10.498905   58921 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:57:10.498982   58921 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:57:10.499087   58921 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:57:10.499182   58921 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:57:10.499265   58921 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:57:10.499326   58921 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:57:10.499385   58921 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:57:10.499500   58921 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:57:10.499633   58921 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:57:10.499724   58921 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:57:10.499784   58921 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:57:10.499840   58921 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:57:10.499892   58921 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:57:10.499982   58921 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:57:10.500064   58921 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:57:10.500155   58921 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:57:10.500241   58921 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:57:10.500343   58921 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:57:10.500442   58921 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:57:10.501847   58921 out.go:204]   - Booting up control plane ...
	I0722 11:57:10.501931   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:57:10.501995   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:57:10.502068   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:57:10.502203   58921 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:57:10.502318   58921 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:57:10.502367   58921 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:57:10.502477   58921 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:57:10.502541   58921 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:57:10.502599   58921 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501448538s
	I0722 11:57:10.502660   58921 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:57:10.502712   58921 kubeadm.go:310] [api-check] The API server is healthy after 5.001578291s
	I0722 11:57:10.502801   58921 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:57:10.502914   58921 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:57:10.502962   58921 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:57:10.503159   58921 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-339929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:57:10.503211   58921 kubeadm.go:310] [bootstrap-token] Using token: ivof4z.0tnj9kdw05524oxn
	I0722 11:57:10.504409   58921 out.go:204]   - Configuring RBAC rules ...
	I0722 11:57:10.504501   58921 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:57:10.504616   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:57:10.504780   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:57:10.504970   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:57:10.505144   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:57:10.505257   58921 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:57:10.505410   58921 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:57:10.505471   58921 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:57:10.505538   58921 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:57:10.505546   58921 kubeadm.go:310] 
	I0722 11:57:10.505631   58921 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:57:10.505649   58921 kubeadm.go:310] 
	I0722 11:57:10.505755   58921 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:57:10.505764   58921 kubeadm.go:310] 
	I0722 11:57:10.505804   58921 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:57:10.505897   58921 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:57:10.505972   58921 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:57:10.505982   58921 kubeadm.go:310] 
	I0722 11:57:10.506059   58921 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:57:10.506067   58921 kubeadm.go:310] 
	I0722 11:57:10.506128   58921 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:57:10.506136   58921 kubeadm.go:310] 
	I0722 11:57:10.506205   58921 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:57:10.506306   58921 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:57:10.506414   58921 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:57:10.506423   58921 kubeadm.go:310] 
	I0722 11:57:10.506520   58921 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:57:10.506617   58921 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:57:10.506626   58921 kubeadm.go:310] 
	I0722 11:57:10.506742   58921 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ivof4z.0tnj9kdw05524oxn \
	I0722 11:57:10.506885   58921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:57:10.506922   58921 kubeadm.go:310] 	--control-plane 
	I0722 11:57:10.506931   58921 kubeadm.go:310] 
	I0722 11:57:10.507044   58921 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:57:10.507057   58921 kubeadm.go:310] 
	I0722 11:57:10.507156   58921 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ivof4z.0tnj9kdw05524oxn \
	I0722 11:57:10.507309   58921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
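The join command above embeds a bootstrap token that kubeadm only keeps valid for 24 hours by default; if it has expired by the time a node joins, a fresh command can be printed on the control plane (standard kubeadm, shown here only for reference):

	# Illustrative only: print a new worker join command with a fresh bootstrap token
	sudo kubeadm token create --print-join-command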
	I0722 11:57:10.507321   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:57:10.507330   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:57:10.508685   58921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:57:10.509747   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:57:10.520250   58921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
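The 496-byte conflist copied above is not reproduced in the log; a bridge CNI configuration of the kind minikube installs typically looks roughly like the sketch below (illustrative values and flags, not the exact file that was written):

	# Illustrative sketch of a bridge CNI conflist (subnet and options are assumptions, not the logged bytes)
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF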
	I0722 11:57:10.540094   58921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:57:10.540196   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:10.540212   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-339929 minikube.k8s.io/updated_at=2024_07_22T11_57_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=no-preload-339929 minikube.k8s.io/primary=true
	I0722 11:57:10.763453   58921 ops.go:34] apiserver oom_adj: -16
	I0722 11:57:10.763505   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:11.264268   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:11.764311   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:12.264344   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:12.764563   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:13.264149   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:13.764260   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:14.263595   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:14.763794   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:15.263787   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:15.343777   58921 kubeadm.go:1113] duration metric: took 4.803631766s to wait for elevateKubeSystemPrivileges
	I0722 11:57:15.343817   58921 kubeadm.go:394] duration metric: took 5m0.988139889s to StartCluster
	I0722 11:57:15.343840   58921 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:57:15.343940   58921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:57:15.345913   58921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:57:15.346216   58921 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:57:15.346387   58921 config.go:182] Loaded profile config "no-preload-339929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 11:57:15.346343   58921 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
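The per-addon state encoded in the toEnable map above can also be inspected from the host once the profile is up, e.g.:

	# Illustrative: list addon status for this profile
	out/minikube-linux-amd64 -p no-preload-339929 addons list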
	I0722 11:57:15.346441   58921 addons.go:69] Setting storage-provisioner=true in profile "no-preload-339929"
	I0722 11:57:15.346454   58921 addons.go:69] Setting metrics-server=true in profile "no-preload-339929"
	I0722 11:57:15.346483   58921 addons.go:234] Setting addon metrics-server=true in "no-preload-339929"
	W0722 11:57:15.346491   58921 addons.go:243] addon metrics-server should already be in state true
	I0722 11:57:15.346485   58921 addons.go:234] Setting addon storage-provisioner=true in "no-preload-339929"
	W0722 11:57:15.346502   58921 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:57:15.346515   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.346529   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.346445   58921 addons.go:69] Setting default-storageclass=true in profile "no-preload-339929"
	I0722 11:57:15.346600   58921 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-339929"
	I0722 11:57:15.346890   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.346920   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.346890   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.346994   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.347007   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.347025   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.347928   58921 out.go:177] * Verifying Kubernetes components...
	I0722 11:57:15.352932   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:57:15.362633   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0722 11:57:15.362665   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I0722 11:57:15.362630   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34691
	I0722 11:57:15.363041   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363053   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363133   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363521   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363537   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363544   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363558   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363568   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363587   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363905   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.363945   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.364078   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.364104   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.364485   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.364517   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.364602   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.364629   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.367146   58921 addons.go:234] Setting addon default-storageclass=true in "no-preload-339929"
	W0722 11:57:15.367170   58921 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:57:15.367197   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.367419   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.367436   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.380125   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46561
	I0722 11:57:15.380393   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
	I0722 11:57:15.380557   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.380972   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.381545   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.381546   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.381570   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.381585   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.381956   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.381987   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.382133   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.382152   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.383766   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.383925   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.384000   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0722 11:57:15.384347   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.384833   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.384856   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.385195   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.385635   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.385664   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.386055   58921 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:57:15.386060   58921 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:57:15.387105   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:57:15.387119   58921 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:57:15.387138   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.387186   58921 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:57:15.387197   58921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:57:15.387215   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.390591   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.390928   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.390975   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.390996   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.391233   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.391366   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.391387   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.391423   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.391599   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.391632   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.391802   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.391841   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.391986   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.392111   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.401709   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34965
	I0722 11:57:15.402082   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.402543   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.402563   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.402854   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.403074   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.404406   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.404603   58921 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:57:15.404617   58921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:57:15.404633   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.407332   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.407829   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.407853   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.408041   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.408218   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.408356   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.408491   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.550538   58921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:57:15.568066   58921 node_ready.go:35] waiting up to 6m0s for node "no-preload-339929" to be "Ready" ...
	I0722 11:57:15.577034   58921 node_ready.go:49] node "no-preload-339929" has status "Ready":"True"
	I0722 11:57:15.577054   58921 node_ready.go:38] duration metric: took 8.96328ms for node "no-preload-339929" to be "Ready" ...
	I0722 11:57:15.577062   58921 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:57:15.587213   58921 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace to be "Ready" ...
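The pod_ready wait that starts here can be reproduced by hand with kubectl wait against the same label selectors (assuming the kubeconfig context carries the profile name), roughly:

	# Illustrative equivalent of the system-critical pod wait (context name assumed)
	kubectl --context no-preload-339929 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s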
	I0722 11:57:15.629092   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:57:15.714856   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:57:15.714885   58921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:57:15.746923   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:57:15.781300   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:57:15.781327   58921 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:57:15.842787   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:57:15.842816   58921 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:57:15.884901   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:57:16.165926   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.165955   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166184   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166200   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166255   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166296   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166315   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166329   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166340   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166454   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166497   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166520   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166542   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166581   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166595   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166551   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166519   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166954   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166969   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.199171   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.199196   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.199533   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.199558   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.199573   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.678992   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.679015   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.679366   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.679389   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.679400   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.679400   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.679408   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.679658   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.679699   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.679708   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.679719   58921 addons.go:475] Verifying addon metrics-server=true in "no-preload-339929"
	I0722 11:57:16.681483   58921 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:57:16.682888   58921 addons.go:510] duration metric: took 1.336544744s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
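Once the metrics-server pod finishes starting, the addon can be sanity-checked from the host, for example:

	# Illustrative: confirm the metrics API is registered and serving (may take a minute)
	kubectl --context no-preload-339929 get apiservice v1beta1.metrics.k8s.io
	kubectl --context no-preload-339929 top nodes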
	I0722 11:57:17.596659   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:20.093596   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:24.750495   59674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:57:24.750641   59674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 11:57:24.752309   59674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:57:24.752368   59674 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:57:24.752499   59674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:57:24.752662   59674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:57:24.752788   59674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:57:24.752851   59674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:57:24.754464   59674 out.go:204]   - Generating certificates and keys ...
	I0722 11:57:24.754528   59674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:57:24.754595   59674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:57:24.754712   59674 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:57:24.754926   59674 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:57:24.755033   59674 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:57:24.755114   59674 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:57:24.755188   59674 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:57:24.755276   59674 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:57:24.755374   59674 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:57:24.755472   59674 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:57:24.755513   59674 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:57:24.755561   59674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:57:24.755606   59674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:57:24.755647   59674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:57:24.755700   59674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:57:24.755742   59674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:57:24.755836   59674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:57:24.755950   59674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:57:24.755986   59674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:57:24.756089   59674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:57:24.757395   59674 out.go:204]   - Booting up control plane ...
	I0722 11:57:24.757482   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:57:24.757566   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:57:24.757657   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:57:24.757905   59674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:57:24.758131   59674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:57:24.758205   59674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:57:24.758311   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.758565   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.758650   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.758852   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.758957   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759153   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759217   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759412   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759495   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759688   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759696   59674 kubeadm.go:310] 
	I0722 11:57:24.759729   59674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:57:24.759791   59674 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:57:24.759812   59674 kubeadm.go:310] 
	I0722 11:57:24.759868   59674 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:57:24.759903   59674 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:57:24.760077   59674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:57:24.760094   59674 kubeadm.go:310] 
	I0722 11:57:24.760245   59674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:57:24.760300   59674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:57:24.760350   59674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:57:24.760363   59674 kubeadm.go:310] 
	I0722 11:57:24.760534   59674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:57:24.760640   59674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:57:24.760654   59674 kubeadm.go:310] 
	I0722 11:57:24.760819   59674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:57:24.760902   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:57:24.761013   59674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:57:24.761124   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:57:24.761213   59674 kubeadm.go:310] 
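The troubleshooting hints printed above can be run as a single diagnostic pass on the node, roughly:

	# Illustrative diagnostic pass for a kubelet that never became healthy
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then, for a failing container ID from the listing:
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>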
	W0722 11:57:24.761263   59674 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0722 11:57:24.761321   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:57:25.222130   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:25.236593   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:57:25.247009   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:57:25.247026   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:57:25.247078   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:57:25.256617   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:57:25.256674   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:57:25.265950   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:57:25.275080   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:57:25.275133   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:57:25.285058   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:57:25.294015   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:57:25.294070   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:57:25.304009   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:57:25.313492   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:57:25.313565   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
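The stale-config cleanup above boils down to removing any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint; done by hand it would look roughly like:

	# Illustrative equivalent of the per-file grep-then-remove sequence above
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done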
	I0722 11:57:25.322903   59674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:57:22.593478   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.593498   58921 pod_ready.go:81] duration metric: took 7.006267885s for pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.593505   58921 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.598122   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.598149   58921 pod_ready.go:81] duration metric: took 4.631196ms for pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.598159   58921 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.602448   58921 pod_ready.go:92] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.602466   58921 pod_ready.go:81] duration metric: took 4.300795ms for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.602474   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.607921   58921 pod_ready.go:92] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.607940   58921 pod_ready.go:81] duration metric: took 5.46066ms for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.607951   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.114900   58921 pod_ready.go:92] pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.114929   58921 pod_ready.go:81] duration metric: took 1.506968399s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.114942   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b5xwg" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.190875   58921 pod_ready.go:92] pod "kube-proxy-b5xwg" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.190895   58921 pod_ready.go:81] duration metric: took 75.947595ms for pod "kube-proxy-b5xwg" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.190905   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.590994   58921 pod_ready.go:92] pod "kube-scheduler-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.591020   58921 pod_ready.go:81] duration metric: took 400.108088ms for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.591029   58921 pod_ready.go:38] duration metric: took 9.013958119s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:57:24.591051   58921 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:57:24.591110   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:57:24.609675   58921 api_server.go:72] duration metric: took 9.263421304s to wait for apiserver process to appear ...
	I0722 11:57:24.609701   58921 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:57:24.609719   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:57:24.613446   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 200:
	ok
	I0722 11:57:24.614282   58921 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 11:57:24.614301   58921 api_server.go:131] duration metric: took 4.591983ms to wait for apiserver health ...
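The healthz probe above can be issued by hand against the same endpoint, e.g.:

	# Illustrative: probe the apiserver health endpoints directly
	curl -k https://192.168.61.112:8443/healthz
	kubectl --context no-preload-339929 get --raw '/readyz?verbose'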
	I0722 11:57:24.614310   58921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:57:24.796872   58921 system_pods.go:59] 9 kube-system pods found
	I0722 11:57:24.796910   58921 system_pods.go:61] "coredns-5cfdc65f69-vg4wp" [3556f321-9c0a-437f-a06e-4eca4b07781d] Running
	I0722 11:57:24.796917   58921 system_pods.go:61] "coredns-5cfdc65f69-xxf6t" [6e933cad-a95a-47c4-b8b9-89205619fb70] Running
	I0722 11:57:24.796922   58921 system_pods.go:61] "etcd-no-preload-339929" [6cd32101-c444-44a3-b024-708228c1b2de] Running
	I0722 11:57:24.796927   58921 system_pods.go:61] "kube-apiserver-no-preload-339929" [cf127459-d105-4ed0-9b65-653e9353e123] Running
	I0722 11:57:24.796933   58921 system_pods.go:61] "kube-controller-manager-no-preload-339929" [11b4341e-5d2d-41c2-a078-6345546aa418] Running
	I0722 11:57:24.796940   58921 system_pods.go:61] "kube-proxy-b5xwg" [6ec19ad2-170e-4402-bcb7-ebf14a2537ce] Running
	I0722 11:57:24.796944   58921 system_pods.go:61] "kube-scheduler-no-preload-339929" [bb0b4b2e-ba9c-4521-bcf9-67230983dc8e] Running
	I0722 11:57:24.796953   58921 system_pods.go:61] "metrics-server-78fcd8795b-9vzx2" [bb2ae44c-3190-4025-8f2e-e236c52da27e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:24.796960   58921 system_pods.go:61] "storage-provisioner" [f56d91d7-a252-485d-936d-3f44804d26ec] Running
	I0722 11:57:24.796973   58921 system_pods.go:74] duration metric: took 182.655813ms to wait for pod list to return data ...
	I0722 11:57:24.796985   58921 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:57:24.992009   58921 default_sa.go:45] found service account: "default"
	I0722 11:57:24.992032   58921 default_sa.go:55] duration metric: took 195.040103ms for default service account to be created ...
	I0722 11:57:24.992040   58921 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:57:25.196738   58921 system_pods.go:86] 9 kube-system pods found
	I0722 11:57:25.196763   58921 system_pods.go:89] "coredns-5cfdc65f69-vg4wp" [3556f321-9c0a-437f-a06e-4eca4b07781d] Running
	I0722 11:57:25.196768   58921 system_pods.go:89] "coredns-5cfdc65f69-xxf6t" [6e933cad-a95a-47c4-b8b9-89205619fb70] Running
	I0722 11:57:25.196772   58921 system_pods.go:89] "etcd-no-preload-339929" [6cd32101-c444-44a3-b024-708228c1b2de] Running
	I0722 11:57:25.196777   58921 system_pods.go:89] "kube-apiserver-no-preload-339929" [cf127459-d105-4ed0-9b65-653e9353e123] Running
	I0722 11:57:25.196781   58921 system_pods.go:89] "kube-controller-manager-no-preload-339929" [11b4341e-5d2d-41c2-a078-6345546aa418] Running
	I0722 11:57:25.196785   58921 system_pods.go:89] "kube-proxy-b5xwg" [6ec19ad2-170e-4402-bcb7-ebf14a2537ce] Running
	I0722 11:57:25.196789   58921 system_pods.go:89] "kube-scheduler-no-preload-339929" [bb0b4b2e-ba9c-4521-bcf9-67230983dc8e] Running
	I0722 11:57:25.196795   58921 system_pods.go:89] "metrics-server-78fcd8795b-9vzx2" [bb2ae44c-3190-4025-8f2e-e236c52da27e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:25.196799   58921 system_pods.go:89] "storage-provisioner" [f56d91d7-a252-485d-936d-3f44804d26ec] Running
	I0722 11:57:25.196806   58921 system_pods.go:126] duration metric: took 204.761601ms to wait for k8s-apps to be running ...
	I0722 11:57:25.196813   58921 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:57:25.196855   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:25.217589   58921 system_svc.go:56] duration metric: took 20.766557ms WaitForService to wait for kubelet
	I0722 11:57:25.217619   58921 kubeadm.go:582] duration metric: took 9.871369454s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:57:25.217641   58921 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:57:25.395091   58921 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:57:25.395116   58921 node_conditions.go:123] node cpu capacity is 2
	I0722 11:57:25.395128   58921 node_conditions.go:105] duration metric: took 177.480389ms to run NodePressure ...
	I0722 11:57:25.395143   58921 start.go:241] waiting for startup goroutines ...
	I0722 11:57:25.395159   58921 start.go:246] waiting for cluster config update ...
	I0722 11:57:25.395173   58921 start.go:255] writing updated cluster config ...
	I0722 11:57:25.395623   58921 ssh_runner.go:195] Run: rm -f paused
	I0722 11:57:25.449438   58921 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0722 11:57:25.450840   58921 out.go:177] * Done! kubectl is now configured to use "no-preload-339929" cluster and "default" namespace by default
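With the profile set as the default context, a quick post-start check from the host looks like:

	# Illustrative post-start sanity check
	kubectl config current-context   # expected: no-preload-339929
	kubectl get nodes -o wide
	kubectl -n kube-system get pods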
	I0722 11:57:25.545662   59674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:59:21.714624   59674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:59:21.714729   59674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 11:59:21.716617   59674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:59:21.716683   59674 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:59:21.716771   59674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:59:21.716939   59674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:59:21.717077   59674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:59:21.717136   59674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:59:21.718742   59674 out.go:204]   - Generating certificates and keys ...
	I0722 11:59:21.718837   59674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:59:21.718927   59674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:59:21.718995   59674 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:59:21.719065   59674 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:59:21.719140   59674 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:59:21.719187   59674 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:59:21.719251   59674 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:59:21.719329   59674 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:59:21.719408   59674 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:59:21.719497   59674 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:59:21.719538   59674 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:59:21.719592   59674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:59:21.719635   59674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:59:21.719680   59674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:59:21.719745   59674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:59:21.719823   59674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:59:21.719970   59674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:59:21.720056   59674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:59:21.720090   59674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:59:21.720147   59674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:59:21.721505   59674 out.go:204]   - Booting up control plane ...
	I0722 11:59:21.721586   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:59:21.721656   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:59:21.721712   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:59:21.721778   59674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:59:21.721923   59674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:59:21.721988   59674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:59:21.722045   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722201   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722272   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722431   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722488   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722658   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722730   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722885   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722943   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.723110   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.723118   59674 kubeadm.go:310] 
	I0722 11:59:21.723154   59674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:59:21.723192   59674 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:59:21.723198   59674 kubeadm.go:310] 
	I0722 11:59:21.723226   59674 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:59:21.723255   59674 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:59:21.723339   59674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:59:21.723346   59674 kubeadm.go:310] 
	I0722 11:59:21.723442   59674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:59:21.723495   59674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:59:21.723537   59674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:59:21.723546   59674 kubeadm.go:310] 
	I0722 11:59:21.723709   59674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:59:21.723823   59674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:59:21.723833   59674 kubeadm.go:310] 
	I0722 11:59:21.723941   59674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:59:21.724023   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:59:21.724086   59674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:59:21.724156   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:59:21.724197   59674 kubeadm.go:310] 
	I0722 11:59:21.724212   59674 kubeadm.go:394] duration metric: took 7m57.831193066s to StartCluster
	I0722 11:59:21.724246   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:59:21.724296   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:59:21.771578   59674 cri.go:89] found id: ""
	I0722 11:59:21.771611   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.771622   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:59:21.771631   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:59:21.771694   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:59:21.809027   59674 cri.go:89] found id: ""
	I0722 11:59:21.809055   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.809065   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:59:21.809071   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:59:21.809143   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:59:21.844667   59674 cri.go:89] found id: ""
	I0722 11:59:21.844690   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.844698   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:59:21.844703   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:59:21.844754   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:59:21.888054   59674 cri.go:89] found id: ""
	I0722 11:59:21.888078   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.888086   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:59:21.888091   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:59:21.888150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:59:21.931688   59674 cri.go:89] found id: ""
	I0722 11:59:21.931711   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.931717   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:59:21.931722   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:59:21.931775   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:59:21.974044   59674 cri.go:89] found id: ""
	I0722 11:59:21.974074   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.974095   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:59:21.974102   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:59:21.974170   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:59:22.010302   59674 cri.go:89] found id: ""
	I0722 11:59:22.010326   59674 logs.go:276] 0 containers: []
	W0722 11:59:22.010334   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:59:22.010338   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:59:22.010385   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:59:22.047170   59674 cri.go:89] found id: ""
	I0722 11:59:22.047201   59674 logs.go:276] 0 containers: []
	W0722 11:59:22.047212   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:59:22.047224   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:59:22.047237   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:59:22.086648   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:59:22.086678   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:59:22.141255   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:59:22.141288   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:59:22.157063   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:59:22.157095   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:59:22.244259   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:59:22.244284   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:59:22.244300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0722 11:59:22.357489   59674 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 11:59:22.357536   59674 out.go:239] * 
	W0722 11:59:22.357600   59674 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:59:22.357622   59674 out.go:239] * 
	W0722 11:59:22.358374   59674 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 11:59:22.361655   59674 out.go:177] 
	W0722 11:59:22.362800   59674 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:59:22.362845   59674 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 11:59:22.362860   59674 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 11:59:22.364239   59674 out.go:177] 
	
	
	==> CRI-O <==
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.265198002Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650107265170410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7b2eb7a-0e10-400b-b553-7e5ff6695013 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.265824644Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7a1acd9-92ae-4592-b550-478fd4bd889c name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.265903466Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7a1acd9-92ae-4592-b550-478fd4bd889c name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.265954951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a7a1acd9-92ae-4592-b550-478fd4bd889c name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.297344847Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12a363a8-90c4-4f09-b9e5-7ca2c0fe8418 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.297468817Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12a363a8-90c4-4f09-b9e5-7ca2c0fe8418 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.298375413Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=005c2073-237f-49c7-b548-3ef169ec17c2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.298895571Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650107298866658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=005c2073-237f-49c7-b548-3ef169ec17c2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.299489893Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9955915d-fafe-4a64-9d79-84ba725aa3fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.299568343Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9955915d-fafe-4a64-9d79-84ba725aa3fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.299606815Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9955915d-fafe-4a64-9d79-84ba725aa3fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.329172444Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb8c86f7-91bd-45f4-ac5a-b83063be03ee name=/runtime.v1.RuntimeService/Version
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.329279375Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb8c86f7-91bd-45f4-ac5a-b83063be03ee name=/runtime.v1.RuntimeService/Version
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.330359104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=326cedd9-6234-4a31-bc58-dfcf4bc8e593 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.330848233Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650107330819849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=326cedd9-6234-4a31-bc58-dfcf4bc8e593 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.331478898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3152e81d-59f8-432a-93bb-2c09adf1efe7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.331544386Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3152e81d-59f8-432a-93bb-2c09adf1efe7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.331596412Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3152e81d-59f8-432a-93bb-2c09adf1efe7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.363597744Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d8318d2-34b9-4c17-a6ce-639eda061c30 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.363716232Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d8318d2-34b9-4c17-a6ce-639eda061c30 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.365130300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eeb9798a-f04f-402f-9a6b-e18391fe9cdb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.365689354Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650107365666200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eeb9798a-f04f-402f-9a6b-e18391fe9cdb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.366367362Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b52615f6-f2d4-436b-9f04-127c95a54467 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.366515775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b52615f6-f2d4-436b-9f04-127c95a54467 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:08:27 old-k8s-version-101261 crio[646]: time="2024-07-22 12:08:27.366571811Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b52615f6-f2d4-436b-9f04-127c95a54467 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul22 11:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050630] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040294] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.664885] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.301657] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.606133] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.299545] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.059053] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064893] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.225240] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.133946] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.249574] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +5.972877] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.060881] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.615774] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[ +12.639328] kauditd_printk_skb: 46 callbacks suppressed
	[Jul22 11:55] systemd-fstab-generator[5024]: Ignoring "noauto" option for root device
	[Jul22 11:57] systemd-fstab-generator[5299]: Ignoring "noauto" option for root device
	[  +0.065899] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:08:27 up 17 min,  0 users,  load average: 0.05, 0.07, 0.05
	Linux old-k8s-version-101261 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6469]:         /usr/local/go/src/sync/once.go:66 +0xec
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6469]: sync.(*Once).Do(...)
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6469]:         /usr/local/go/src/sync/once.go:57
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6469]: net.systemConf(...)
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6469]:         /usr/local/go/src/net/conf.go:42
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6469]: net.(*Resolver).lookupIP(0x70c5740, 0x4f7fdc0, 0xc0003ef7c0, 0x48ab5d6, 0x3, 0xc000a454d0, 0x1f, 0x8, 0xe, 0x0, ...)
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6469]:         /usr/local/go/src/net/lookup_unix.go:94 +0x205
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6469]: net.glob..func1(0x4f7fdc0, 0xc0003ef7c0, 0xc000b94670, 0x48ab5d6, 0x3, 0xc000a454d0, 0x1f, 0x0, 0x0, 0xc0002df080, ...)
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6469]:         /usr/local/go/src/net/hook.go:23 +0x72
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6469]: net.(*Resolver).lookupIPAddr.func1(0x0, 0x0, 0x0, 0x0)
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6469]:         /usr/local/go/src/net/lookup.go:293 +0xb9
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6469]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc0009a6fa0, 0xc000a45590, 0x23, 0xc0003ef800)
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6469]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6469]: created by internal/singleflight.(*Group).DoChan
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6469]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Jul 22 12:08:22 old-k8s-version-101261 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 22 12:08:22 old-k8s-version-101261 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 22 12:08:22 old-k8s-version-101261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jul 22 12:08:22 old-k8s-version-101261 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 22 12:08:22 old-k8s-version-101261 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6477]: I0722 12:08:22.778478    6477 server.go:416] Version: v1.20.0
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6477]: I0722 12:08:22.778751    6477 server.go:837] Client rotation is on, will bootstrap in background
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6477]: I0722 12:08:22.780660    6477 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6477]: W0722 12:08:22.781493    6477 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 22 12:08:22 old-k8s-version-101261 kubelet[6477]: I0722 12:08:22.781810    6477 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-101261 -n old-k8s-version-101261
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-101261 -n old-k8s-version-101261: exit status 2 (230.516579ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-101261" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.27s)
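The kubelet on old-k8s-version-101261 never became healthy in this run (systemd shows exit status 255 and a restart counter of 114), and minikube itself suggests a cgroup-driver mismatch. A minimal troubleshooting sketch, assuming SSH access to that profile; the commands simply mirror the suggestions kubeadm and minikube print in the log above:

	# Check kubelet state on the node (kubeadm's own suggestions).
	minikube -p old-k8s-version-101261 ssh -- sudo systemctl status kubelet
	minikube -p old-k8s-version-101261 ssh -- sudo journalctl -xeu kubelet
	# List any control-plane containers CRI-O managed to start.
	minikube -p old-k8s-version-101261 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Retry with the cgroup driver minikube suggests (flag taken from the log above).
	minikube start -p old-k8s-version-101261 --extra-config=kubelet.cgroup-driver=systemd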

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (498.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-802149 -n embed-certs-802149
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-22 12:13:44.179431725 +0000 UTC m=+6294.906846061
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-802149 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-802149 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.248µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-802149 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
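The assertion above expects the dashboard-metrics-scraper deployment to reference the image registry.k8s.io/echoserver:1.4, but no deployment info was gathered because the test's 9m deadline had already expired when it ran kubectl. A minimal manual check, assuming the embed-certs-802149 context is still reachable:

	kubectl --context embed-certs-802149 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-802149 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'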
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-802149 -n embed-certs-802149
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-802149 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-802149 logs -n 25: (1.2566375s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-511820 sudo systemctl                        | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | status kubelet --all --full                          |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo systemctl                        | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | cat kubelet --no-pager                               |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo journalctl                       | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | -xeu kubelet --all --full                            |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo cat                              | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo cat                              | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo systemctl                        | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC |                     |
	|         | status docker --all --full                           |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo systemctl                        | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | cat docker --no-pager                                |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo cat                              | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo docker                           | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo systemctl                        | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC |                     |
	|         | status cri-docker --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo systemctl                        | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | cat cri-docker --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo cat                              | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo cat                              | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo                                  | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo systemctl                        | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC |                     |
	|         | status containerd --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo systemctl                        | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | cat containerd --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo cat                              | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo cat                              | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo containerd                       | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | config dump                                          |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo systemctl                        | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | status crio --all --full                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo systemctl                        | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | cat crio --no-pager                                  |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo find                             | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p auto-511820 sudo crio                             | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p auto-511820                                       | auto-511820   | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC | 22 Jul 24 12:13 UTC |
	| start   | -p calico-511820 --memory=3072                       | calico-511820 | jenkins | v1.33.1 | 22 Jul 24 12:13 UTC |                     |
	|         | --alsologtostderr --wait=true                        |               |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |               |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |               |         |         |                     |                     |
	|         | --container-runtime=crio                             |               |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 12:13:44
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 12:13:44.274272   69537 out.go:291] Setting OutFile to fd 1 ...
	I0722 12:13:44.274371   69537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 12:13:44.274377   69537 out.go:304] Setting ErrFile to fd 2...
	I0722 12:13:44.274384   69537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 12:13:44.274588   69537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 12:13:44.275196   69537 out.go:298] Setting JSON to false
	I0722 12:13:44.276331   69537 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6976,"bootTime":1721643448,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 12:13:44.276407   69537 start.go:139] virtualization: kvm guest
	I0722 12:13:44.278919   69537 out.go:177] * [calico-511820] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 12:13:44.280418   69537 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 12:13:44.280449   69537 notify.go:220] Checking for updates...
	I0722 12:13:44.286630   69537 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 12:13:44.287812   69537 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 12:13:44.289012   69537 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 12:13:44.290242   69537 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 12:13:44.291431   69537 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 12:13:44.293227   69537 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 12:13:44.293362   69537 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 12:13:44.293493   69537 config.go:182] Loaded profile config "kindnet-511820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 12:13:44.293603   69537 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 12:13:44.331711   69537 out.go:177] * Using the kvm2 driver based on user configuration
	I0722 12:13:44.333137   69537 start.go:297] selected driver: kvm2
	I0722 12:13:44.333156   69537 start.go:901] validating driver "kvm2" against <nil>
	I0722 12:13:44.333168   69537 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 12:13:44.333980   69537 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 12:13:44.334053   69537 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 12:13:44.349186   69537 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 12:13:44.349239   69537 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 12:13:44.349455   69537 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 12:13:44.349504   69537 cni.go:84] Creating CNI manager for "calico"
	I0722 12:13:44.349511   69537 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0722 12:13:44.349556   69537 start.go:340] cluster config:
	{Name:calico-511820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-511820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 12:13:44.349647   69537 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 12:13:44.351245   69537 out.go:177] * Starting "calico-511820" primary control-plane node in "calico-511820" cluster
	I0722 12:13:44.352303   69537 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 12:13:44.352324   69537 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 12:13:44.352330   69537 cache.go:56] Caching tarball of preloaded images
	I0722 12:13:44.352396   69537 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 12:13:44.352410   69537 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 12:13:44.352505   69537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/calico-511820/config.json ...
	I0722 12:13:44.352535   69537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/calico-511820/config.json: {Name:mk8f6b6e400ea8bb7c77461979131488ada8015e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 12:13:44.352669   69537 start.go:360] acquireMachinesLock for calico-511820: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 12:13:44.352703   69537 start.go:364] duration metric: took 19.704µs to acquireMachinesLock for "calico-511820"
	I0722 12:13:44.352723   69537 start.go:93] Provisioning new machine with config: &{Name:calico-511820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:calico-511820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 12:13:44.352792   69537 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.834788315Z" level=debug msg="Response: &ImageStatusResponse{Image:nil,Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=de86a297-1709-4424-8ec8-eafc90c5ee73 name=/runtime.v1.ImageService/ImageStatus
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.835935265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5f1c4ae-8d09-490d-9391-88e40a5d7ea4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.835987584Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5f1c4ae-8d09-490d-9391-88e40a5d7ea4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.836197531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbcf2083c04a1d408071b31332ff8b549e73f32f30db045e27cd1eac387c2d6d,PodSandboxId:90478a6390f86f2b8ac6306678e7a77ebcc1ef5ac410b81e2597b14acd8c863a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649381413569476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2dkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a689e-5a99-4889-808f-3e1e199323d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3e9e1a99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43f8eb6548b82cbcc6eaa6343a29f7ef5a5da15fd9bfffca726f89f1615ab31f,PodSandboxId:a55c29b4325026534e80c4cfffd8fbf41556ecb0f71423283a27c387a2adbf3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649381336000056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kz8d9,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 26d2d65c-aa13-4d94-b091-bf674fee0185,},Annotations:map[string]string{io.kubernetes.container.hash: c0e510d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24677b58b615746e542a86e372d6e058377caeae7c3bad8e38e637e0a739401,PodSandboxId:a4b1d613d74c78350d911daa369c5f881e292b7f64caf3dbd1e4d0e5131e1fa3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1721649380835972007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68fcb5f-42b5-408e-9c10-d86b14a1b993,},Annotations:map[string]string{io.kubernetes.container.hash: eb1195fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8b4ca43b70a5fc2132a40265b3221663d6f04079d23a77c1bf87f074070dff,PodSandboxId:01187af5ae6efbddd297e5d7aea2255c17ee3a225816545bc7c80ab8bff072bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1721649379613468207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w89tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4d3074-e552-4c7b-ba0f-f57a3b80f529,},Annotations:map[string]string{io.kubernetes.container.hash: 15644981,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba96f7ba02a2f060e36221e9b63e3ae7cefa25dc8d2fe3bea95788b791e72,PodSandboxId:a104b9d6402861593a1cdceffea6985a08b2887a04adffc4d58ec29a329949e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721649359664000130,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a0c5c6edb5d883fef231826f1af424,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e2be9e61df1a6b265892a736caa6f00fe08ab062efb4dcea99977bbc982a22,PodSandboxId:da9b9182195ccfb38e749ddd2bbc778f38ac355a56a00f33a015258ef05c4348,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721649359589477657,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0e917324a04ef903bd31a4455cde51,},Annotations:map[string]string{io.kubernetes.container.hash: a1ab7ed6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f94464010f7593920e88163954f7bce0420bbe7cb0b46e496d562cf431599b,PodSandboxId:3de43b37dd3d9eace181f1f266ea1854c3889cedfeaccaafbf8ae6c153086193,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721649359551580251,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250d0e5099efb6fe8f3658a0f7969bf8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4c87c40e71e9186221dc05159ba215daa5ebe898cb9d01fa52528238f74ba,PodSandboxId:8181b461637c5a76f87d872a99e1914575ce9816f3aa23a11de015e7ffbe8dfd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721649359533561448,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e9cf4d7e5880d210e342ba58db90aa,},Annotations:map[string]string{io.kubernetes.container.hash: f7930bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5f1c4ae-8d09-490d-9391-88e40a5d7ea4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.842375765Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e00c953-4f7f-476f-b7c3-0b97fa416c14 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.842430320Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e00c953-4f7f-476f-b7c3-0b97fa416c14 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.843983546Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f50c3794-275e-45dc-b640-dd03b8ff0777 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.844745842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650424844560451,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f50c3794-275e-45dc-b640-dd03b8ff0777 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.845380052Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c448f1e-8b86-4809-a73e-ffac47501ae9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.845443303Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c448f1e-8b86-4809-a73e-ffac47501ae9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.845686577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbcf2083c04a1d408071b31332ff8b549e73f32f30db045e27cd1eac387c2d6d,PodSandboxId:90478a6390f86f2b8ac6306678e7a77ebcc1ef5ac410b81e2597b14acd8c863a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649381413569476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2dkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a689e-5a99-4889-808f-3e1e199323d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3e9e1a99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43f8eb6548b82cbcc6eaa6343a29f7ef5a5da15fd9bfffca726f89f1615ab31f,PodSandboxId:a55c29b4325026534e80c4cfffd8fbf41556ecb0f71423283a27c387a2adbf3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649381336000056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kz8d9,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 26d2d65c-aa13-4d94-b091-bf674fee0185,},Annotations:map[string]string{io.kubernetes.container.hash: c0e510d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24677b58b615746e542a86e372d6e058377caeae7c3bad8e38e637e0a739401,PodSandboxId:a4b1d613d74c78350d911daa369c5f881e292b7f64caf3dbd1e4d0e5131e1fa3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1721649380835972007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68fcb5f-42b5-408e-9c10-d86b14a1b993,},Annotations:map[string]string{io.kubernetes.container.hash: eb1195fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8b4ca43b70a5fc2132a40265b3221663d6f04079d23a77c1bf87f074070dff,PodSandboxId:01187af5ae6efbddd297e5d7aea2255c17ee3a225816545bc7c80ab8bff072bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1721649379613468207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w89tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4d3074-e552-4c7b-ba0f-f57a3b80f529,},Annotations:map[string]string{io.kubernetes.container.hash: 15644981,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba96f7ba02a2f060e36221e9b63e3ae7cefa25dc8d2fe3bea95788b791e72,PodSandboxId:a104b9d6402861593a1cdceffea6985a08b2887a04adffc4d58ec29a329949e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721649359664000130,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a0c5c6edb5d883fef231826f1af424,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e2be9e61df1a6b265892a736caa6f00fe08ab062efb4dcea99977bbc982a22,PodSandboxId:da9b9182195ccfb38e749ddd2bbc778f38ac355a56a00f33a015258ef05c4348,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721649359589477657,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0e917324a04ef903bd31a4455cde51,},Annotations:map[string]string{io.kubernetes.container.hash: a1ab7ed6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f94464010f7593920e88163954f7bce0420bbe7cb0b46e496d562cf431599b,PodSandboxId:3de43b37dd3d9eace181f1f266ea1854c3889cedfeaccaafbf8ae6c153086193,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721649359551580251,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250d0e5099efb6fe8f3658a0f7969bf8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4c87c40e71e9186221dc05159ba215daa5ebe898cb9d01fa52528238f74ba,PodSandboxId:8181b461637c5a76f87d872a99e1914575ce9816f3aa23a11de015e7ffbe8dfd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721649359533561448,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e9cf4d7e5880d210e342ba58db90aa,},Annotations:map[string]string{io.kubernetes.container.hash: f7930bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c448f1e-8b86-4809-a73e-ffac47501ae9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.891596893Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e072c46e-e24e-44d7-9611-8ae492686ffa name=/runtime.v1.RuntimeService/Version
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.891685179Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e072c46e-e24e-44d7-9611-8ae492686ffa name=/runtime.v1.RuntimeService/Version
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.893046696Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a952681-5a7d-425a-b4ee-3714ce05d6e3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.893633522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650424893601899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a952681-5a7d-425a-b4ee-3714ce05d6e3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.894387800Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=858886d2-c3a0-4888-b6bc-bc2b3ec3c5ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.894490212Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=858886d2-c3a0-4888-b6bc-bc2b3ec3c5ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.894690775Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbcf2083c04a1d408071b31332ff8b549e73f32f30db045e27cd1eac387c2d6d,PodSandboxId:90478a6390f86f2b8ac6306678e7a77ebcc1ef5ac410b81e2597b14acd8c863a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649381413569476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2dkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a689e-5a99-4889-808f-3e1e199323d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3e9e1a99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43f8eb6548b82cbcc6eaa6343a29f7ef5a5da15fd9bfffca726f89f1615ab31f,PodSandboxId:a55c29b4325026534e80c4cfffd8fbf41556ecb0f71423283a27c387a2adbf3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649381336000056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kz8d9,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 26d2d65c-aa13-4d94-b091-bf674fee0185,},Annotations:map[string]string{io.kubernetes.container.hash: c0e510d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24677b58b615746e542a86e372d6e058377caeae7c3bad8e38e637e0a739401,PodSandboxId:a4b1d613d74c78350d911daa369c5f881e292b7f64caf3dbd1e4d0e5131e1fa3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1721649380835972007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68fcb5f-42b5-408e-9c10-d86b14a1b993,},Annotations:map[string]string{io.kubernetes.container.hash: eb1195fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8b4ca43b70a5fc2132a40265b3221663d6f04079d23a77c1bf87f074070dff,PodSandboxId:01187af5ae6efbddd297e5d7aea2255c17ee3a225816545bc7c80ab8bff072bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1721649379613468207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w89tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4d3074-e552-4c7b-ba0f-f57a3b80f529,},Annotations:map[string]string{io.kubernetes.container.hash: 15644981,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba96f7ba02a2f060e36221e9b63e3ae7cefa25dc8d2fe3bea95788b791e72,PodSandboxId:a104b9d6402861593a1cdceffea6985a08b2887a04adffc4d58ec29a329949e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721649359664000130,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a0c5c6edb5d883fef231826f1af424,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e2be9e61df1a6b265892a736caa6f00fe08ab062efb4dcea99977bbc982a22,PodSandboxId:da9b9182195ccfb38e749ddd2bbc778f38ac355a56a00f33a015258ef05c4348,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721649359589477657,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0e917324a04ef903bd31a4455cde51,},Annotations:map[string]string{io.kubernetes.container.hash: a1ab7ed6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f94464010f7593920e88163954f7bce0420bbe7cb0b46e496d562cf431599b,PodSandboxId:3de43b37dd3d9eace181f1f266ea1854c3889cedfeaccaafbf8ae6c153086193,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721649359551580251,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250d0e5099efb6fe8f3658a0f7969bf8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4c87c40e71e9186221dc05159ba215daa5ebe898cb9d01fa52528238f74ba,PodSandboxId:8181b461637c5a76f87d872a99e1914575ce9816f3aa23a11de015e7ffbe8dfd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721649359533561448,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e9cf4d7e5880d210e342ba58db90aa,},Annotations:map[string]string{io.kubernetes.container.hash: f7930bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=858886d2-c3a0-4888-b6bc-bc2b3ec3c5ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.935157097Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97be8ce8-bf8b-4cbf-a820-b338178c0ea4 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.935231209Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97be8ce8-bf8b-4cbf-a820-b338178c0ea4 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.936794298Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b394fa2-14e2-4c93-8c8d-22f76c46c4a8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.937208438Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650424937185781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b394fa2-14e2-4c93-8c8d-22f76c46c4a8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.937996605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8348c15-75bf-4b79-9bc8-7319778ac02d name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.938066191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8348c15-75bf-4b79-9bc8-7319778ac02d name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:13:44 embed-certs-802149 crio[722]: time="2024-07-22 12:13:44.938323588Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbcf2083c04a1d408071b31332ff8b549e73f32f30db045e27cd1eac387c2d6d,PodSandboxId:90478a6390f86f2b8ac6306678e7a77ebcc1ef5ac410b81e2597b14acd8c863a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649381413569476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c2dkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82a689e-5a99-4889-808f-3e1e199323d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3e9e1a99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43f8eb6548b82cbcc6eaa6343a29f7ef5a5da15fd9bfffca726f89f1615ab31f,PodSandboxId:a55c29b4325026534e80c4cfffd8fbf41556ecb0f71423283a27c387a2adbf3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649381336000056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kz8d9,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 26d2d65c-aa13-4d94-b091-bf674fee0185,},Annotations:map[string]string{io.kubernetes.container.hash: c0e510d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24677b58b615746e542a86e372d6e058377caeae7c3bad8e38e637e0a739401,PodSandboxId:a4b1d613d74c78350d911daa369c5f881e292b7f64caf3dbd1e4d0e5131e1fa3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1721649380835972007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68fcb5f-42b5-408e-9c10-d86b14a1b993,},Annotations:map[string]string{io.kubernetes.container.hash: eb1195fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8b4ca43b70a5fc2132a40265b3221663d6f04079d23a77c1bf87f074070dff,PodSandboxId:01187af5ae6efbddd297e5d7aea2255c17ee3a225816545bc7c80ab8bff072bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1721649379613468207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w89tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4d3074-e552-4c7b-ba0f-f57a3b80f529,},Annotations:map[string]string{io.kubernetes.container.hash: 15644981,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba96f7ba02a2f060e36221e9b63e3ae7cefa25dc8d2fe3bea95788b791e72,PodSandboxId:a104b9d6402861593a1cdceffea6985a08b2887a04adffc4d58ec29a329949e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721649359664000130,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a0c5c6edb5d883fef231826f1af424,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e2be9e61df1a6b265892a736caa6f00fe08ab062efb4dcea99977bbc982a22,PodSandboxId:da9b9182195ccfb38e749ddd2bbc778f38ac355a56a00f33a015258ef05c4348,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721649359589477657,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0e917324a04ef903bd31a4455cde51,},Annotations:map[string]string{io.kubernetes.container.hash: a1ab7ed6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f94464010f7593920e88163954f7bce0420bbe7cb0b46e496d562cf431599b,PodSandboxId:3de43b37dd3d9eace181f1f266ea1854c3889cedfeaccaafbf8ae6c153086193,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721649359551580251,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250d0e5099efb6fe8f3658a0f7969bf8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4c87c40e71e9186221dc05159ba215daa5ebe898cb9d01fa52528238f74ba,PodSandboxId:8181b461637c5a76f87d872a99e1914575ce9816f3aa23a11de015e7ffbe8dfd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721649359533561448,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-802149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e9cf4d7e5880d210e342ba58db90aa,},Annotations:map[string]string{io.kubernetes.container.hash: f7930bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8348c15-75bf-4b79-9bc8-7319778ac02d name=/runtime.v1.RuntimeService/ListContainers
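	
	For reference, a minimal Go sketch (an illustration, not part of the test run) of issuing the same CRI ListContainers call that CRI-O logs above, against the crio.sock endpoint named in the node's cri-socket annotation. It assumes the k8s.io/cri-api and google.golang.org/grpc modules are available; names and paths other than the socket are hypothetical.
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Same socket as the kubeadm.alpha.kubernetes.io/cri-socket annotation below.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Equivalent to the logged request: filter on CONTAINER_RUNNING only.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{
				State: &runtimeapi.ContainerStateValue{
					State: runtimeapi.ContainerState_CONTAINER_RUNNING,
				},
			},
		})
		if err != nil {
			log.Fatal(err)
		}
		// Prints the truncated ID, name, and state, as in the container status table below.
		for _, c := range resp.Containers {
			fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
		}
	}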
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fbcf2083c04a1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 minutes ago      Running             coredns                   0                   90478a6390f86       coredns-7db6d8ff4d-c2dkr
	43f8eb6548b82       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 minutes ago      Running             coredns                   0                   a55c29b432502       coredns-7db6d8ff4d-kz8d9
	d24677b58b615       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   a4b1d613d74c7       storage-provisioner
	4d8b4ca43b70a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   17 minutes ago      Running             kube-proxy                0                   01187af5ae6ef       kube-proxy-w89tg
	68eba96f7ba02       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   17 minutes ago      Running             kube-scheduler            2                   a104b9d640286       kube-scheduler-embed-certs-802149
	10e2be9e61df1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   17 minutes ago      Running             etcd                      2                   da9b9182195cc       etcd-embed-certs-802149
	f1f94464010f7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   17 minutes ago      Running             kube-controller-manager   2                   3de43b37dd3d9       kube-controller-manager-embed-certs-802149
	7ff4c87c40e71       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   17 minutes ago      Running             kube-apiserver            2                   8181b461637c5       kube-apiserver-embed-certs-802149
	
	
	==> coredns [43f8eb6548b82cbcc6eaa6343a29f7ef5a5da15fd9bfffca726f89f1615ab31f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [fbcf2083c04a1d408071b31332ff8b549e73f32f30db045e27cd1eac387c2d6d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-802149
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-802149
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=embed-certs-802149
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T11_56_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 11:56:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-802149
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 12:13:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 12:11:44 +0000   Mon, 22 Jul 2024 11:56:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 12:11:44 +0000   Mon, 22 Jul 2024 11:56:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 12:11:44 +0000   Mon, 22 Jul 2024 11:56:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 12:11:44 +0000   Mon, 22 Jul 2024 11:56:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.113
	  Hostname:    embed-certs-802149
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8766530bf8c84d62a77555a63c00c03f
	  System UUID:                8766530b-f8c8-4d62-a775-55a63c00c03f
	  Boot ID:                    d82689a1-9245-4021-98d4-b2fe0c418ca5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-c2dkr                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7db6d8ff4d-kz8d9                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-embed-certs-802149                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-embed-certs-802149             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-embed-certs-802149    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-w89tg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-embed-certs-802149             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-569cc877fc-88d4n               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node embed-certs-802149 status is now: NodeHasSufficientMemory
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node embed-certs-802149 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node embed-certs-802149 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node embed-certs-802149 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m   node-controller  Node embed-certs-802149 event: Registered Node embed-certs-802149 in Controller
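	
	For reference, a minimal Go sketch (illustrative only, assuming client-go is available and the jenkins kubeconfig above points at this profile's context) that reads the same node conditions and allocatable resources reported in the describe output above:
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Kubeconfig path taken from the KUBECONFIG value logged earlier in this report.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19313-5960/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "embed-certs-802149", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Same data as the Conditions and Allocatable blocks above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
		}
		fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String())
		fmt.Println("allocatable memory:", node.Status.Allocatable.Memory().String())
	}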
	
	
	==> dmesg <==
	[  +0.049789] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040325] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.479305] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.146525] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.579553] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.066304] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.061388] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067153] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.219458] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.117690] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.284661] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[Jul22 11:51] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +0.064511] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.850546] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +5.639504] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.560214] kauditd_printk_skb: 84 callbacks suppressed
	[Jul22 11:55] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.763530] systemd-fstab-generator[3561]: Ignoring "noauto" option for root device
	[Jul22 11:56] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.595929] systemd-fstab-generator[3879]: Ignoring "noauto" option for root device
	[ +14.862534] systemd-fstab-generator[4087]: Ignoring "noauto" option for root device
	[  +0.106761] kauditd_printk_skb: 14 callbacks suppressed
	[Jul22 11:57] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [10e2be9e61df1a6b265892a736caa6f00fe08ab062efb4dcea99977bbc982a22] <==
	{"level":"info","ts":"2024-07-22T11:56:00.330951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6bf3317fd0e8dc60 received MsgVoteResp from 6bf3317fd0e8dc60 at term 2"}
	{"level":"info","ts":"2024-07-22T11:56:00.331033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6bf3317fd0e8dc60 became leader at term 2"}
	{"level":"info","ts":"2024-07-22T11:56:00.331063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6bf3317fd0e8dc60 elected leader 6bf3317fd0e8dc60 at term 2"}
	{"level":"info","ts":"2024-07-22T11:56:00.335784Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6bf3317fd0e8dc60","local-member-attributes":"{Name:embed-certs-802149 ClientURLs:[https://192.168.72.113:2379]}","request-path":"/0/members/6bf3317fd0e8dc60/attributes","cluster-id":"19cf5c6a1483664a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T11:56:00.337317Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:56:00.337766Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:56:00.339451Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:56:00.342908Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.113:2379"}
	{"level":"info","ts":"2024-07-22T11:56:00.355417Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"19cf5c6a1483664a","local-member-id":"6bf3317fd0e8dc60","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:56:00.355512Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:56:00.355552Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:56:00.356863Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T11:56:00.357157Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T11:56:00.382329Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T12:06:00.547321Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":681}
	{"level":"info","ts":"2024-07-22T12:06:00.556436Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":681,"took":"8.459376ms","hash":3878393727,"current-db-size-bytes":2273280,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2273280,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-22T12:06:00.556526Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3878393727,"revision":681,"compact-revision":-1}
	{"level":"info","ts":"2024-07-22T12:11:00.554814Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":924}
	{"level":"info","ts":"2024-07-22T12:11:00.55904Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":924,"took":"3.859435ms","hash":940129388,"current-db-size-bytes":2273280,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1593344,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-22T12:11:00.559112Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":940129388,"revision":924,"compact-revision":681}
	{"level":"info","ts":"2024-07-22T12:12:11.410805Z","caller":"traceutil/trace.go:171","msg":"trace[929246317] transaction","detail":"{read_only:false; response_revision:1226; number_of_response:1; }","duration":"195.301549ms","start":"2024-07-22T12:12:11.21541Z","end":"2024-07-22T12:12:11.410712Z","steps":["trace[929246317] 'process raft request'  (duration: 195.173457ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T12:12:38.05578Z","caller":"traceutil/trace.go:171","msg":"trace[1203647412] linearizableReadLoop","detail":"{readStateIndex:1461; appliedIndex:1460; }","duration":"124.698098ms","start":"2024-07-22T12:12:37.931053Z","end":"2024-07-22T12:12:38.055751Z","steps":["trace[1203647412] 'read index received'  (duration: 124.370771ms)","trace[1203647412] 'applied index is now lower than readState.Index'  (duration: 326.677µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-22T12:12:38.056414Z","caller":"traceutil/trace.go:171","msg":"trace[1885193549] transaction","detail":"{read_only:false; response_revision:1248; number_of_response:1; }","duration":"192.515029ms","start":"2024-07-22T12:12:37.863887Z","end":"2024-07-22T12:12:38.056402Z","steps":["trace[1885193549] 'process raft request'  (duration: 191.580461ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T12:12:38.057491Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.354561ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-22T12:12:38.059354Z","caller":"traceutil/trace.go:171","msg":"trace[233924514] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1248; }","duration":"128.299414ms","start":"2024-07-22T12:12:37.931028Z","end":"2024-07-22T12:12:38.059327Z","steps":["trace[233924514] 'agreement among raft nodes before linearized reading'  (duration: 126.344981ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:13:45 up 23 min,  0 users,  load average: 0.06, 0.10, 0.09
	Linux embed-certs-802149 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7ff4c87c40e71e9186221dc05159ba215daa5ebe898cb9d01fa52528238f74ba] <==
	I0722 12:07:03.273221       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:09:03.272186       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:09:03.272252       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 12:09:03.272308       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:09:03.273420       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:09:03.273538       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 12:09:03.273569       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:11:02.274974       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:11:02.275078       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0722 12:11:03.275181       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:11:03.275362       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 12:11:03.275395       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:11:03.275481       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:11:03.275529       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 12:11:03.276680       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:12:03.275600       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:12:03.275844       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 12:12:03.275893       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:12:03.276929       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:12:03.277008       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 12:12:03.277040       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f1f94464010f7593920e88163954f7bce0420bbe7cb0b46e496d562cf431599b] <==
	I0722 12:07:53.849039       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="79.974µs"
	E0722 12:08:18.464062       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:08:19.014798       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:08:48.468891       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:08:49.022466       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:09:18.475057       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:09:19.030071       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:09:48.480337       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:09:49.037864       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:10:18.486354       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:10:19.052032       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:10:48.491968       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:10:49.063005       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:11:18.497491       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:11:19.073149       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:11:48.506737       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:11:49.083691       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:12:18.512373       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:12:19.098028       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 12:12:39.853466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="459.06µs"
	E0722 12:12:48.517958       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:12:49.109235       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 12:12:53.847527       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="274.391µs"
	E0722 12:13:18.524391       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:13:19.120480       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4d8b4ca43b70a5fc2132a40265b3221663d6f04079d23a77c1bf87f074070dff] <==
	I0722 11:56:19.863746       1 server_linux.go:69] "Using iptables proxy"
	I0722 11:56:19.878603       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.113"]
	I0722 11:56:20.001968       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 11:56:20.002009       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 11:56:20.002024       1 server_linux.go:165] "Using iptables Proxier"
	I0722 11:56:20.011520       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 11:56:20.011735       1 server.go:872] "Version info" version="v1.30.3"
	I0722 11:56:20.011747       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 11:56:20.022738       1 config.go:192] "Starting service config controller"
	I0722 11:56:20.022781       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 11:56:20.022873       1 config.go:101] "Starting endpoint slice config controller"
	I0722 11:56:20.022894       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 11:56:20.026207       1 config.go:319] "Starting node config controller"
	I0722 11:56:20.026315       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 11:56:20.123910       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 11:56:20.123925       1 shared_informer.go:320] Caches are synced for service config
	I0722 11:56:20.126884       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [68eba96f7ba02a2f060e36221e9b63e3ae7cefa25dc8d2fe3bea95788b791e72] <==
	W0722 11:56:02.282790       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 11:56:02.282819       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 11:56:02.282872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 11:56:02.282898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 11:56:02.282937       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 11:56:02.282991       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 11:56:02.283253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 11:56:02.283320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 11:56:03.106116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 11:56:03.106144       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 11:56:03.116604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 11:56:03.116685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 11:56:03.207628       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 11:56:03.207715       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 11:56:03.219092       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 11:56:03.219173       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 11:56:03.425592       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0722 11:56:03.425724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0722 11:56:03.445529       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 11:56:03.445619       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 11:56:03.473796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 11:56:03.473895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 11:56:03.539453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 11:56:03.540001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0722 11:56:06.572045       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 12:11:18 embed-certs-802149 kubelet[3886]: E0722 12:11:18.832907    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:11:32 embed-certs-802149 kubelet[3886]: E0722 12:11:32.836756    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:11:46 embed-certs-802149 kubelet[3886]: E0722 12:11:46.833009    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:12:01 embed-certs-802149 kubelet[3886]: E0722 12:12:01.832512    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:12:04 embed-certs-802149 kubelet[3886]: E0722 12:12:04.852708    3886 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 12:12:04 embed-certs-802149 kubelet[3886]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 12:12:04 embed-certs-802149 kubelet[3886]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 12:12:04 embed-certs-802149 kubelet[3886]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:12:04 embed-certs-802149 kubelet[3886]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:12:14 embed-certs-802149 kubelet[3886]: E0722 12:12:14.834805    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:12:28 embed-certs-802149 kubelet[3886]: E0722 12:12:28.847089    3886 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 22 12:12:28 embed-certs-802149 kubelet[3886]: E0722 12:12:28.847164    3886 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 22 12:12:28 embed-certs-802149 kubelet[3886]: E0722 12:12:28.847478    3886 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvccr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-88d4n_kube-system(b705d674-b431-4946-aa67-871d7d2f9e08): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 22 12:12:28 embed-certs-802149 kubelet[3886]: E0722 12:12:28.847537    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:12:39 embed-certs-802149 kubelet[3886]: E0722 12:12:39.833119    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:12:53 embed-certs-802149 kubelet[3886]: E0722 12:12:53.832624    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:13:04 embed-certs-802149 kubelet[3886]: E0722 12:13:04.851565    3886 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 12:13:04 embed-certs-802149 kubelet[3886]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 12:13:04 embed-certs-802149 kubelet[3886]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 12:13:04 embed-certs-802149 kubelet[3886]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:13:04 embed-certs-802149 kubelet[3886]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:13:05 embed-certs-802149 kubelet[3886]: E0722 12:13:05.832664    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:13:17 embed-certs-802149 kubelet[3886]: E0722 12:13:17.833046    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:13:31 embed-certs-802149 kubelet[3886]: E0722 12:13:31.832874    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	Jul 22 12:13:44 embed-certs-802149 kubelet[3886]: E0722 12:13:44.835187    3886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-88d4n" podUID="b705d674-b431-4946-aa67-871d7d2f9e08"
	
	
	==> storage-provisioner [d24677b58b615746e542a86e372d6e058377caeae7c3bad8e38e637e0a739401] <==
	I0722 11:56:20.956677       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 11:56:20.967130       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 11:56:20.967480       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 11:56:20.979502       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 11:56:20.981950       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bba87dd7-5cc0-41de-9f7c-2def2a497698", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-802149_4fbf273f-c8be-49f7-8f6c-4340f0b6a053 became leader
	I0722 11:56:20.982038       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-802149_4fbf273f-c8be-49f7-8f6c-4340f0b6a053!
	I0722 11:56:21.084357       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-802149_4fbf273f-c8be-49f7-8f6c-4340f0b6a053!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-802149 -n embed-certs-802149
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-802149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-88d4n
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-802149 describe pod metrics-server-569cc877fc-88d4n
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-802149 describe pod metrics-server-569cc877fc-88d4n: exit status 1 (87.351747ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-88d4n" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-802149 describe pod metrics-server-569cc877fc-88d4n: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (498.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (533.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-605740 -n default-k8s-diff-port-605740
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-22 12:14:57.111009289 +0000 UTC m=+6367.838423629
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-605740 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-605740 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.97µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-605740 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-605740 -n default-k8s-diff-port-605740
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-605740 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-605740 logs -n 25: (1.266574922s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-511820 sudo                               | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo                               | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo                               | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo cat                           | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo cat                           | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo                               | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo                               | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo cat                           | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo docker                        | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo                               | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo                               | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo cat                           | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo cat                           | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo                               | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo                               | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo                               | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo cat                           | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo cat                           | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo                               | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo                               | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo                               | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo find                          | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-511820 sudo crio                          | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-511820                                    | kindnet-511820            | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC | 22 Jul 24 12:14 UTC |
	| start   | -p enable-default-cni-511820                         | enable-default-cni-511820 | jenkins | v1.33.1 | 22 Jul 24 12:14 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 12:14:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 12:14:40.938800   71664 out.go:291] Setting OutFile to fd 1 ...
	I0722 12:14:40.938927   71664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 12:14:40.938933   71664 out.go:304] Setting ErrFile to fd 2...
	I0722 12:14:40.938937   71664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 12:14:40.939158   71664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 12:14:40.940315   71664 out.go:298] Setting JSON to false
	I0722 12:14:40.941461   71664 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7033,"bootTime":1721643448,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 12:14:40.941528   71664 start.go:139] virtualization: kvm guest
	I0722 12:14:40.943054   71664 out.go:177] * [enable-default-cni-511820] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 12:14:40.944545   71664 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 12:14:40.944663   71664 notify.go:220] Checking for updates...
	I0722 12:14:40.946898   71664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 12:14:40.948509   71664 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 12:14:40.952484   71664 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 12:14:40.953856   71664 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 12:14:40.955006   71664 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 12:14:39.404226   69537 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 12:14:39.404246   69537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 12:14:39.404262   69537 main.go:141] libmachine: (calico-511820) Calling .GetSSHHostname
	I0722 12:14:39.408111   69537 main.go:141] libmachine: (calico-511820) DBG | domain calico-511820 has defined MAC address 52:54:00:76:89:11 in network mk-calico-511820
	I0722 12:14:39.408802   69537 main.go:141] libmachine: (calico-511820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:89:11", ip: ""} in network mk-calico-511820: {Iface:virbr1 ExpiryTime:2024-07-22 13:13:59 +0000 UTC Type:0 Mac:52:54:00:76:89:11 Iaid: IPaddr:192.168.61.14 Prefix:24 Hostname:calico-511820 Clientid:01:52:54:00:76:89:11}
	I0722 12:14:39.408820   69537 main.go:141] libmachine: (calico-511820) DBG | domain calico-511820 has defined IP address 192.168.61.14 and MAC address 52:54:00:76:89:11 in network mk-calico-511820
	I0722 12:14:39.409031   69537 main.go:141] libmachine: (calico-511820) Calling .GetSSHPort
	I0722 12:14:39.411595   69537 main.go:141] libmachine: (calico-511820) Calling .GetSSHKeyPath
	I0722 12:14:39.411771   69537 main.go:141] libmachine: (calico-511820) Calling .GetSSHUsername
	I0722 12:14:39.411920   69537 sshutil.go:53] new ssh client: &{IP:192.168.61.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/calico-511820/id_rsa Username:docker}
	I0722 12:14:39.422369   69537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36567
	I0722 12:14:39.422792   69537 main.go:141] libmachine: () Calling .GetVersion
	I0722 12:14:39.423353   69537 main.go:141] libmachine: Using API Version  1
	I0722 12:14:39.423373   69537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 12:14:39.423720   69537 main.go:141] libmachine: () Calling .GetMachineName
	I0722 12:14:39.423959   69537 main.go:141] libmachine: (calico-511820) Calling .GetState
	I0722 12:14:39.425709   69537 main.go:141] libmachine: (calico-511820) Calling .DriverName
	I0722 12:14:39.425953   69537 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 12:14:39.425970   69537 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 12:14:39.425986   69537 main.go:141] libmachine: (calico-511820) Calling .GetSSHHostname
	I0722 12:14:39.432492   69537 main.go:141] libmachine: (calico-511820) Calling .GetSSHPort
	I0722 12:14:39.432573   69537 main.go:141] libmachine: (calico-511820) DBG | domain calico-511820 has defined MAC address 52:54:00:76:89:11 in network mk-calico-511820
	I0722 12:14:39.432590   69537 main.go:141] libmachine: (calico-511820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:89:11", ip: ""} in network mk-calico-511820: {Iface:virbr1 ExpiryTime:2024-07-22 13:13:59 +0000 UTC Type:0 Mac:52:54:00:76:89:11 Iaid: IPaddr:192.168.61.14 Prefix:24 Hostname:calico-511820 Clientid:01:52:54:00:76:89:11}
	I0722 12:14:39.432617   69537 main.go:141] libmachine: (calico-511820) DBG | domain calico-511820 has defined IP address 192.168.61.14 and MAC address 52:54:00:76:89:11 in network mk-calico-511820
	I0722 12:14:39.432893   69537 main.go:141] libmachine: (calico-511820) Calling .GetSSHKeyPath
	I0722 12:14:39.433070   69537 main.go:141] libmachine: (calico-511820) Calling .GetSSHUsername
	I0722 12:14:39.433231   69537 sshutil.go:53] new ssh client: &{IP:192.168.61.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/calico-511820/id_rsa Username:docker}
	I0722 12:14:39.759988   69537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 12:14:39.760212   69537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0722 12:14:39.878578   69537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 12:14:39.935268   69537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 12:14:40.715844   69537 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
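	(Annotation) The 12:14:39.760212 entry above shows how the host.minikube.internal record lands in CoreDNS: the coredns ConfigMap is dumped with kubectl, a hosts{} block resolving host.minikube.internal to the host gateway is spliced in front of the forward plugin with sed, and the result is fed back through kubectl replace. The following Go sketch only illustrates assembling that same pipeline; it is not minikube's actual source, and the binary/kubeconfig paths are taken verbatim from the log.

```go
package main

import (
	"fmt"
	"os/exec"
)

// buildCoreDNSPatch assembles the bash pipeline seen in the log: dump the
// coredns ConfigMap, insert a hosts{} block that resolves
// host.minikube.internal to the host gateway IP just above the forward
// plugin, and replace the ConfigMap. Illustrative sketch only.
func buildCoreDNSPatch(kubectl, kubeconfig, hostIP string) string {
	sedExpr := fmt.Sprintf(
		`/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }`,
		hostIP)
	return fmt.Sprintf(
		`sudo %[1]s --kubeconfig=%[2]s -n kube-system get configmap coredns -o yaml | sed -e '%[3]s' | sudo %[1]s --kubeconfig=%[2]s replace -f -`,
		kubectl, kubeconfig, sedExpr)
}

func main() {
	cmd := buildCoreDNSPatch("/var/lib/minikube/binaries/v1.30.3/kubectl",
		"/var/lib/minikube/kubeconfig", "192.168.61.1")
	// Run through bash -c, as ssh_runner does in the log above.
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Println(string(out), err)
}
```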
	I0722 12:14:40.717349   69537 node_ready.go:35] waiting up to 15m0s for node "calico-511820" to be "Ready" ...
	I0722 12:14:40.951948   69537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.073336213s)
	I0722 12:14:40.951991   69537 main.go:141] libmachine: Making call to close driver server
	I0722 12:14:40.952003   69537 main.go:141] libmachine: (calico-511820) Calling .Close
	I0722 12:14:40.952091   69537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.016798357s)
	I0722 12:14:40.952108   69537 main.go:141] libmachine: Making call to close driver server
	I0722 12:14:40.952116   69537 main.go:141] libmachine: (calico-511820) Calling .Close
	I0722 12:14:40.952648   69537 main.go:141] libmachine: (calico-511820) DBG | Closing plugin on server side
	I0722 12:14:40.952692   69537 main.go:141] libmachine: Successfully made call to close driver server
	I0722 12:14:40.952700   69537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 12:14:40.952708   69537 main.go:141] libmachine: Making call to close driver server
	I0722 12:14:40.952722   69537 main.go:141] libmachine: (calico-511820) Calling .Close
	I0722 12:14:40.952808   69537 main.go:141] libmachine: Successfully made call to close driver server
	I0722 12:14:40.952810   69537 main.go:141] libmachine: (calico-511820) DBG | Closing plugin on server side
	I0722 12:14:40.952820   69537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 12:14:40.952829   69537 main.go:141] libmachine: Making call to close driver server
	I0722 12:14:40.952837   69537 main.go:141] libmachine: (calico-511820) Calling .Close
	I0722 12:14:40.954542   69537 main.go:141] libmachine: (calico-511820) DBG | Closing plugin on server side
	I0722 12:14:40.954548   69537 main.go:141] libmachine: Successfully made call to close driver server
	I0722 12:14:40.954549   69537 main.go:141] libmachine: (calico-511820) DBG | Closing plugin on server side
	I0722 12:14:40.954563   69537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 12:14:40.954576   69537 main.go:141] libmachine: Successfully made call to close driver server
	I0722 12:14:40.954584   69537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 12:14:40.969139   69537 main.go:141] libmachine: Making call to close driver server
	I0722 12:14:40.969165   69537 main.go:141] libmachine: (calico-511820) Calling .Close
	I0722 12:14:40.970991   69537 main.go:141] libmachine: (calico-511820) DBG | Closing plugin on server side
	I0722 12:14:40.971038   69537 main.go:141] libmachine: Successfully made call to close driver server
	I0722 12:14:40.971047   69537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 12:14:40.972652   69537 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0722 12:14:40.956797   71664 config.go:182] Loaded profile config "calico-511820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 12:14:40.956941   71664 config.go:182] Loaded profile config "custom-flannel-511820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 12:14:40.957062   71664 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 12:14:40.957205   71664 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 12:14:41.004714   71664 out.go:177] * Using the kvm2 driver based on user configuration
	I0722 12:14:41.005933   71664 start.go:297] selected driver: kvm2
	I0722 12:14:41.005960   71664 start.go:901] validating driver "kvm2" against <nil>
	I0722 12:14:41.005976   71664 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 12:14:41.007091   71664 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 12:14:41.007215   71664 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 12:14:41.024903   71664 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 12:14:41.024958   71664 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0722 12:14:41.025260   71664 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0722 12:14:41.025300   71664 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 12:14:41.025335   71664 cni.go:84] Creating CNI manager for "bridge"
	I0722 12:14:41.025343   71664 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 12:14:41.025421   71664 start.go:340] cluster config:
	{Name:enable-default-cni-511820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-511820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 12:14:41.025615   71664 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 12:14:41.027133   71664 out.go:177] * Starting "enable-default-cni-511820" primary control-plane node in "enable-default-cni-511820" cluster
	I0722 12:14:41.028420   71664 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 12:14:41.028475   71664 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 12:14:41.028499   71664 cache.go:56] Caching tarball of preloaded images
	I0722 12:14:41.028587   71664 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 12:14:41.028604   71664 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 12:14:41.028732   71664 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/enable-default-cni-511820/config.json ...
	I0722 12:14:41.028758   71664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/enable-default-cni-511820/config.json: {Name:mkb03230a9c65e2a72379e9e60a204174bdd9b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 12:14:41.028925   71664 start.go:360] acquireMachinesLock for enable-default-cni-511820: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 12:14:41.028975   71664 start.go:364] duration metric: took 26.84µs to acquireMachinesLock for "enable-default-cni-511820"
	I0722 12:14:41.028996   71664 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-511820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-511820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 12:14:41.029081   71664 start.go:125] createHost starting for "" (driver="kvm2")
	I0722 12:14:38.712409   69808 main.go:141] libmachine: (custom-flannel-511820) Calling .GetIP
	I0722 12:14:38.716053   69808 main.go:141] libmachine: (custom-flannel-511820) DBG | domain custom-flannel-511820 has defined MAC address 52:54:00:25:13:7a in network mk-custom-flannel-511820
	I0722 12:14:38.716488   69808 main.go:141] libmachine: (custom-flannel-511820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:13:7a", ip: ""} in network mk-custom-flannel-511820: {Iface:virbr3 ExpiryTime:2024-07-22 13:14:24 +0000 UTC Type:0 Mac:52:54:00:25:13:7a Iaid: IPaddr:192.168.72.184 Prefix:24 Hostname:custom-flannel-511820 Clientid:01:52:54:00:25:13:7a}
	I0722 12:14:38.716517   69808 main.go:141] libmachine: (custom-flannel-511820) DBG | domain custom-flannel-511820 has defined IP address 192.168.72.184 and MAC address 52:54:00:25:13:7a in network mk-custom-flannel-511820
	I0722 12:14:38.716757   69808 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 12:14:38.721307   69808 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
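	(Annotation) The two commands above make the guest's /etc/hosts entry for host.minikube.internal idempotent: grep checks for the record, and the bash block rewrites the file by filtering out any stale line and appending the current gateway IP. A minimal Go sketch of the same rewrite, assuming a local file path rather than the sudo-over-SSH path the log uses:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostRecord rewrites an /etc/hosts-style file so exactly one line maps
// host.minikube.internal to the given IP, mirroring the grep -v / echo / cp
// pipeline in the log. Sketch only; the real step runs inside the guest.
func ensureHostRecord(path, ip string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop any stale record
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\thost.minikube.internal")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Hypothetical local copy used for illustration.
	if err := ensureHostRecord("/tmp/hosts-copy", "192.168.72.1"); err != nil {
		fmt.Println("error:", err)
	}
}
```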
	I0722 12:14:38.738085   69808 kubeadm.go:883] updating cluster {Name:custom-flannel-511820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.30.3 ClusterName:custom-flannel-511820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.72.184 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 12:14:38.738181   69808 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 12:14:38.738222   69808 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 12:14:38.782819   69808 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 12:14:38.782870   69808 ssh_runner.go:195] Run: which lz4
	I0722 12:14:38.788186   69808 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 12:14:38.793876   69808 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 12:14:38.793897   69808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 12:14:40.458389   69808 crio.go:462] duration metric: took 1.670228838s to copy over tarball
	I0722 12:14:40.458455   69808 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 12:14:40.973866   69537 addons.go:510] duration metric: took 1.624714669s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0722 12:14:41.222791   69537 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-511820" context rescaled to 1 replicas
	I0722 12:14:42.721109   69537 node_ready.go:53] node "calico-511820" has status "Ready":"False"
	I0722 12:14:41.030669   71664 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 12:14:41.030836   71664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 12:14:41.030878   71664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 12:14:41.045987   71664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I0722 12:14:41.046516   71664 main.go:141] libmachine: () Calling .GetVersion
	I0722 12:14:41.047059   71664 main.go:141] libmachine: Using API Version  1
	I0722 12:14:41.047079   71664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 12:14:41.047459   71664 main.go:141] libmachine: () Calling .GetMachineName
	I0722 12:14:41.047649   71664 main.go:141] libmachine: (enable-default-cni-511820) Calling .GetMachineName
	I0722 12:14:41.047804   71664 main.go:141] libmachine: (enable-default-cni-511820) Calling .DriverName
	I0722 12:14:41.047969   71664 start.go:159] libmachine.API.Create for "enable-default-cni-511820" (driver="kvm2")
	I0722 12:14:41.047993   71664 client.go:168] LocalClient.Create starting
	I0722 12:14:41.048020   71664 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem
	I0722 12:14:41.048050   71664 main.go:141] libmachine: Decoding PEM data...
	I0722 12:14:41.048066   71664 main.go:141] libmachine: Parsing certificate...
	I0722 12:14:41.048109   71664 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem
	I0722 12:14:41.048125   71664 main.go:141] libmachine: Decoding PEM data...
	I0722 12:14:41.048135   71664 main.go:141] libmachine: Parsing certificate...
	I0722 12:14:41.048154   71664 main.go:141] libmachine: Running pre-create checks...
	I0722 12:14:41.048162   71664 main.go:141] libmachine: (enable-default-cni-511820) Calling .PreCreateCheck
	I0722 12:14:41.048578   71664 main.go:141] libmachine: (enable-default-cni-511820) Calling .GetConfigRaw
	I0722 12:14:41.048943   71664 main.go:141] libmachine: Creating machine...
	I0722 12:14:41.048954   71664 main.go:141] libmachine: (enable-default-cni-511820) Calling .Create
	I0722 12:14:41.049102   71664 main.go:141] libmachine: (enable-default-cni-511820) Creating KVM machine...
	I0722 12:14:41.050341   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | found existing default KVM network
	I0722 12:14:41.051397   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:41.051256   71701 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cd:bb:47} reservation:<nil>}
	I0722 12:14:41.052514   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:41.052420   71701 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012df80}
	I0722 12:14:41.052535   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | created network xml: 
	I0722 12:14:41.052547   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | <network>
	I0722 12:14:41.052555   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG |   <name>mk-enable-default-cni-511820</name>
	I0722 12:14:41.052565   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG |   <dns enable='no'/>
	I0722 12:14:41.052571   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG |   
	I0722 12:14:41.052581   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0722 12:14:41.052592   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG |     <dhcp>
	I0722 12:14:41.052606   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0722 12:14:41.052617   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG |     </dhcp>
	I0722 12:14:41.052678   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG |   </ip>
	I0722 12:14:41.052696   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG |   
	I0722 12:14:41.052708   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | </network>
	I0722 12:14:41.052723   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | 
	I0722 12:14:41.057650   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | trying to create private KVM network mk-enable-default-cni-511820 192.168.50.0/24...
	I0722 12:14:41.149009   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | private KVM network mk-enable-default-cni-511820 192.168.50.0/24 created
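	(Annotation) The network.go lines above show the driver skipping 192.168.39.0/24 (already taken by virbr4) and settling on 192.168.50.0/24 for the new private network; the other clusters in this run sit on 192.168.61.0/24 and 192.168.72.0/24. A simplified Go sketch of that "first free private /24" scan, assuming the candidates step the third octet by 11 as observed in this run (not necessarily minikube's exact algorithm):

```go
package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks candidate private /24s and returns the first one not
// already claimed by an existing libvirt network. Illustrative sketch of the
// behaviour logged by network.go above.
func firstFreeSubnet(taken map[string]bool) (*net.IPNet, error) {
	for octet := 39; octet <= 254; octet += 11 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[cidr] {
			continue
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free private /24 found")
}

func main() {
	taken := map[string]bool{"192.168.39.0/24": true} // virbr4 in the log
	subnet, err := firstFreeSubnet(taken)
	fmt.Println(subnet, err) // 192.168.50.0/24 <nil>
}
```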
	I0722 12:14:41.149190   71664 main.go:141] libmachine: (enable-default-cni-511820) Setting up store path in /home/jenkins/minikube-integration/19313-5960/.minikube/machines/enable-default-cni-511820 ...
	I0722 12:14:41.149326   71664 main.go:141] libmachine: (enable-default-cni-511820) Building disk image from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 12:14:41.149487   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:41.149418   71701 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 12:14:41.153643   71664 main.go:141] libmachine: (enable-default-cni-511820) Downloading /home/jenkins/minikube-integration/19313-5960/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 12:14:41.467124   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:41.467000   71701 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/enable-default-cni-511820/id_rsa...
	I0722 12:14:41.725816   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:41.725688   71701 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/enable-default-cni-511820/enable-default-cni-511820.rawdisk...
	I0722 12:14:41.725849   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | Writing magic tar header
	I0722 12:14:41.725867   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | Writing SSH key tar header
	I0722 12:14:41.725880   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:41.725832   71701 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/enable-default-cni-511820 ...
	I0722 12:14:41.725962   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/enable-default-cni-511820
	I0722 12:14:41.726008   71664 main.go:141] libmachine: (enable-default-cni-511820) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/enable-default-cni-511820 (perms=drwx------)
	I0722 12:14:41.726031   71664 main.go:141] libmachine: (enable-default-cni-511820) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines (perms=drwxr-xr-x)
	I0722 12:14:41.726045   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines
	I0722 12:14:41.726066   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 12:14:41.726080   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960
	I0722 12:14:41.726096   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0722 12:14:41.726106   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | Checking permissions on dir: /home/jenkins
	I0722 12:14:41.726128   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | Checking permissions on dir: /home
	I0722 12:14:41.726149   71664 main.go:141] libmachine: (enable-default-cni-511820) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube (perms=drwxr-xr-x)
	I0722 12:14:41.726163   71664 main.go:141] libmachine: (enable-default-cni-511820) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960 (perms=drwxrwxr-x)
	I0722 12:14:41.726173   71664 main.go:141] libmachine: (enable-default-cni-511820) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0722 12:14:41.726185   71664 main.go:141] libmachine: (enable-default-cni-511820) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0722 12:14:41.726193   71664 main.go:141] libmachine: (enable-default-cni-511820) Creating domain...
	I0722 12:14:41.726203   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | Skipping /home - not owner
	I0722 12:14:41.727562   71664 main.go:141] libmachine: (enable-default-cni-511820) define libvirt domain using xml: 
	I0722 12:14:41.727586   71664 main.go:141] libmachine: (enable-default-cni-511820) <domain type='kvm'>
	I0722 12:14:41.727611   71664 main.go:141] libmachine: (enable-default-cni-511820)   <name>enable-default-cni-511820</name>
	I0722 12:14:41.727624   71664 main.go:141] libmachine: (enable-default-cni-511820)   <memory unit='MiB'>3072</memory>
	I0722 12:14:41.727636   71664 main.go:141] libmachine: (enable-default-cni-511820)   <vcpu>2</vcpu>
	I0722 12:14:41.727647   71664 main.go:141] libmachine: (enable-default-cni-511820)   <features>
	I0722 12:14:41.727670   71664 main.go:141] libmachine: (enable-default-cni-511820)     <acpi/>
	I0722 12:14:41.727684   71664 main.go:141] libmachine: (enable-default-cni-511820)     <apic/>
	I0722 12:14:41.727835   71664 main.go:141] libmachine: (enable-default-cni-511820)     <pae/>
	I0722 12:14:41.727867   71664 main.go:141] libmachine: (enable-default-cni-511820)     
	I0722 12:14:41.727906   71664 main.go:141] libmachine: (enable-default-cni-511820)   </features>
	I0722 12:14:41.727927   71664 main.go:141] libmachine: (enable-default-cni-511820)   <cpu mode='host-passthrough'>
	I0722 12:14:41.727949   71664 main.go:141] libmachine: (enable-default-cni-511820)   
	I0722 12:14:41.727959   71664 main.go:141] libmachine: (enable-default-cni-511820)   </cpu>
	I0722 12:14:41.727969   71664 main.go:141] libmachine: (enable-default-cni-511820)   <os>
	I0722 12:14:41.727977   71664 main.go:141] libmachine: (enable-default-cni-511820)     <type>hvm</type>
	I0722 12:14:41.727987   71664 main.go:141] libmachine: (enable-default-cni-511820)     <boot dev='cdrom'/>
	I0722 12:14:41.727994   71664 main.go:141] libmachine: (enable-default-cni-511820)     <boot dev='hd'/>
	I0722 12:14:41.728027   71664 main.go:141] libmachine: (enable-default-cni-511820)     <bootmenu enable='no'/>
	I0722 12:14:41.728040   71664 main.go:141] libmachine: (enable-default-cni-511820)   </os>
	I0722 12:14:41.728050   71664 main.go:141] libmachine: (enable-default-cni-511820)   <devices>
	I0722 12:14:41.728060   71664 main.go:141] libmachine: (enable-default-cni-511820)     <disk type='file' device='cdrom'>
	I0722 12:14:41.728074   71664 main.go:141] libmachine: (enable-default-cni-511820)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/enable-default-cni-511820/boot2docker.iso'/>
	I0722 12:14:41.728085   71664 main.go:141] libmachine: (enable-default-cni-511820)       <target dev='hdc' bus='scsi'/>
	I0722 12:14:41.728094   71664 main.go:141] libmachine: (enable-default-cni-511820)       <readonly/>
	I0722 12:14:41.728118   71664 main.go:141] libmachine: (enable-default-cni-511820)     </disk>
	I0722 12:14:41.728132   71664 main.go:141] libmachine: (enable-default-cni-511820)     <disk type='file' device='disk'>
	I0722 12:14:41.728144   71664 main.go:141] libmachine: (enable-default-cni-511820)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0722 12:14:41.728158   71664 main.go:141] libmachine: (enable-default-cni-511820)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/enable-default-cni-511820/enable-default-cni-511820.rawdisk'/>
	I0722 12:14:41.728188   71664 main.go:141] libmachine: (enable-default-cni-511820)       <target dev='hda' bus='virtio'/>
	I0722 12:14:41.728203   71664 main.go:141] libmachine: (enable-default-cni-511820)     </disk>
	I0722 12:14:41.728217   71664 main.go:141] libmachine: (enable-default-cni-511820)     <interface type='network'>
	I0722 12:14:41.728242   71664 main.go:141] libmachine: (enable-default-cni-511820)       <source network='mk-enable-default-cni-511820'/>
	I0722 12:14:41.728256   71664 main.go:141] libmachine: (enable-default-cni-511820)       <model type='virtio'/>
	I0722 12:14:41.728268   71664 main.go:141] libmachine: (enable-default-cni-511820)     </interface>
	I0722 12:14:41.728279   71664 main.go:141] libmachine: (enable-default-cni-511820)     <interface type='network'>
	I0722 12:14:41.728290   71664 main.go:141] libmachine: (enable-default-cni-511820)       <source network='default'/>
	I0722 12:14:41.728304   71664 main.go:141] libmachine: (enable-default-cni-511820)       <model type='virtio'/>
	I0722 12:14:41.728315   71664 main.go:141] libmachine: (enable-default-cni-511820)     </interface>
	I0722 12:14:41.728327   71664 main.go:141] libmachine: (enable-default-cni-511820)     <serial type='pty'>
	I0722 12:14:41.728339   71664 main.go:141] libmachine: (enable-default-cni-511820)       <target port='0'/>
	I0722 12:14:41.728353   71664 main.go:141] libmachine: (enable-default-cni-511820)     </serial>
	I0722 12:14:41.728362   71664 main.go:141] libmachine: (enable-default-cni-511820)     <console type='pty'>
	I0722 12:14:41.728377   71664 main.go:141] libmachine: (enable-default-cni-511820)       <target type='serial' port='0'/>
	I0722 12:14:41.728409   71664 main.go:141] libmachine: (enable-default-cni-511820)     </console>
	I0722 12:14:41.728420   71664 main.go:141] libmachine: (enable-default-cni-511820)     <rng model='virtio'>
	I0722 12:14:41.728436   71664 main.go:141] libmachine: (enable-default-cni-511820)       <backend model='random'>/dev/random</backend>
	I0722 12:14:41.728450   71664 main.go:141] libmachine: (enable-default-cni-511820)     </rng>
	I0722 12:14:41.728459   71664 main.go:141] libmachine: (enable-default-cni-511820)     
	I0722 12:14:41.728471   71664 main.go:141] libmachine: (enable-default-cni-511820)     
	I0722 12:14:41.728483   71664 main.go:141] libmachine: (enable-default-cni-511820)   </devices>
	I0722 12:14:41.728494   71664 main.go:141] libmachine: (enable-default-cni-511820) </domain>
	I0722 12:14:41.728502   71664 main.go:141] libmachine: (enable-default-cni-511820) 
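	(Annotation) With the domain XML above rendered, the next log lines define the libvirt domain and boot it ("Ensuring networks are active...", "Creating domain..."). The kvm2 driver talks to libvirt through the Go bindings; the sketch below is only an equivalent stand-in using the virsh CLI, with a placeholder XML string instead of the full document printed above.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// defineAndStart writes the generated domain XML to a temp file, registers it
// with libvirt, and boots it. Illustrative stand-in for the driver's libvirt
// calls; requires virsh and access to qemu:///system.
func defineAndStart(name, domainXML string) error {
	f, err := os.CreateTemp("", name+"-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(domainXML); err != nil {
		return err
	}
	f.Close()
	if out, err := exec.Command("virsh", "-c", "qemu:///system", "define", f.Name()).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "-c", "qemu:///system", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Placeholder; see the full <domain> XML printed in the log above.
	xml := "<domain type='kvm'><name>example-511820</name>...</domain>"
	fmt.Println(defineAndStart("example-511820", xml))
}
```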
	I0722 12:14:41.733202   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | domain enable-default-cni-511820 has defined MAC address 52:54:00:1f:f1:32 in network default
	I0722 12:14:41.733793   71664 main.go:141] libmachine: (enable-default-cni-511820) Ensuring networks are active...
	I0722 12:14:41.733833   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | domain enable-default-cni-511820 has defined MAC address 52:54:00:4e:68:bb in network mk-enable-default-cni-511820
	I0722 12:14:41.734591   71664 main.go:141] libmachine: (enable-default-cni-511820) Ensuring network default is active
	I0722 12:14:41.735082   71664 main.go:141] libmachine: (enable-default-cni-511820) Ensuring network mk-enable-default-cni-511820 is active
	I0722 12:14:41.735742   71664 main.go:141] libmachine: (enable-default-cni-511820) Getting domain xml...
	I0722 12:14:41.736634   71664 main.go:141] libmachine: (enable-default-cni-511820) Creating domain...
	I0722 12:14:43.186982   71664 main.go:141] libmachine: (enable-default-cni-511820) Waiting to get IP...
	I0722 12:14:43.188273   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | domain enable-default-cni-511820 has defined MAC address 52:54:00:4e:68:bb in network mk-enable-default-cni-511820
	I0722 12:14:43.188777   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | unable to find current IP address of domain enable-default-cni-511820 in network mk-enable-default-cni-511820
	I0722 12:14:43.188822   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:43.188760   71701 retry.go:31] will retry after 299.174916ms: waiting for machine to come up
	I0722 12:14:43.489238   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | domain enable-default-cni-511820 has defined MAC address 52:54:00:4e:68:bb in network mk-enable-default-cni-511820
	I0722 12:14:43.489674   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | unable to find current IP address of domain enable-default-cni-511820 in network mk-enable-default-cni-511820
	I0722 12:14:43.489700   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:43.489646   71701 retry.go:31] will retry after 349.551035ms: waiting for machine to come up
	I0722 12:14:43.841163   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | domain enable-default-cni-511820 has defined MAC address 52:54:00:4e:68:bb in network mk-enable-default-cni-511820
	I0722 12:14:43.841707   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | unable to find current IP address of domain enable-default-cni-511820 in network mk-enable-default-cni-511820
	I0722 12:14:43.841724   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:43.841676   71701 retry.go:31] will retry after 325.05746ms: waiting for machine to come up
	I0722 12:14:44.168145   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | domain enable-default-cni-511820 has defined MAC address 52:54:00:4e:68:bb in network mk-enable-default-cni-511820
	I0722 12:14:44.168593   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | unable to find current IP address of domain enable-default-cni-511820 in network mk-enable-default-cni-511820
	I0722 12:14:44.168615   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:44.168544   71701 retry.go:31] will retry after 446.257296ms: waiting for machine to come up
	I0722 12:14:44.616078   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | domain enable-default-cni-511820 has defined MAC address 52:54:00:4e:68:bb in network mk-enable-default-cni-511820
	I0722 12:14:44.616578   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | unable to find current IP address of domain enable-default-cni-511820 in network mk-enable-default-cni-511820
	I0722 12:14:44.616603   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:44.616551   71701 retry.go:31] will retry after 544.905283ms: waiting for machine to come up
	I0722 12:14:45.163459   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | domain enable-default-cni-511820 has defined MAC address 52:54:00:4e:68:bb in network mk-enable-default-cni-511820
	I0722 12:14:45.163915   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | unable to find current IP address of domain enable-default-cni-511820 in network mk-enable-default-cni-511820
	I0722 12:14:45.163956   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:45.163891   71701 retry.go:31] will retry after 630.518097ms: waiting for machine to come up
	I0722 12:14:45.796277   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | domain enable-default-cni-511820 has defined MAC address 52:54:00:4e:68:bb in network mk-enable-default-cni-511820
	I0722 12:14:45.796975   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | unable to find current IP address of domain enable-default-cni-511820 in network mk-enable-default-cni-511820
	I0722 12:14:45.797103   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:45.797052   71701 retry.go:31] will retry after 1.035411906s: waiting for machine to come up
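	(Annotation) The retry.go lines above poll the new domain for a DHCP lease with jittered, growing delays (299ms, 349ms, ... up to about 1s here) until an IP appears. A self-contained Go sketch of that pattern, with a stubbed lookup function standing in for the libvirt lease query:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup() with a jittered, growing backoff until it returns
// an IP or the timeout expires, mirroring the "will retry after ..." lines
// emitted by retry.go above. Sketch only.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/3)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2 // grow roughly 1.5x per attempt
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet") // stand-in for "unable to find current IP"
		}
		return "192.168.50.10", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}
```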
	I0722 12:14:43.308439   69808 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.849956673s)
	I0722 12:14:43.308470   69808 crio.go:469] duration metric: took 2.850053218s to extract the tarball
	I0722 12:14:43.308479   69808 ssh_runner.go:146] rm: /preloaded.tar.lz4
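	(Annotation) The sequence above is the preload path: since crictl found no preloaded images, the ~406 MB preload tarball is scp'd to /preloaded.tar.lz4, unpacked into /var with tar + lz4 (keeping security.capability xattrs so any file capabilities in the image layers survive), and then deleted. A minimal Go sketch of just the extraction command, as logged:

```go
package main

import (
	"fmt"
	"os/exec"
)

// extractPreload runs the same extraction command as the log: decompress the
// lz4 preload tarball into dest while preserving security.capability xattrs.
// Illustrative sketch; needs root and the lz4 binary on the target.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4", "/var"))
}
```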
	I0722 12:14:43.356702   69808 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 12:14:43.424074   69808 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 12:14:43.424104   69808 cache_images.go:84] Images are preloaded, skipping loading
	I0722 12:14:43.424114   69808 kubeadm.go:934] updating node { 192.168.72.184 8443 v1.30.3 crio true true} ...
	I0722 12:14:43.424231   69808 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-511820 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-511820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0722 12:14:43.424305   69808 ssh_runner.go:195] Run: crio config
	I0722 12:14:43.487026   69808 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0722 12:14:43.487066   69808 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 12:14:43.487095   69808 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.184 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-511820 NodeName:custom-flannel-511820 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 12:14:43.487289   69808 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-511820"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 12:14:43.487359   69808 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 12:14:43.498864   69808 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 12:14:43.498930   69808 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 12:14:43.510051   69808 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0722 12:14:43.527479   69808 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 12:14:43.547878   69808 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
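	(Annotation) At this point the generated kubeadm config has been copied to the node as /var/tmp/minikube/kubeadm.yaml.new (2165 bytes); it is promoted to kubeadm.yaml a little later (see the cp at 12:14:45.320073) before bootstrapping. The exact kubeadm invocation is not shown in this slice of the log, so the sketch below is deliberately minimal and only illustrates the standard --config form; any extra flags minikube passes are omitted.

```go
package main

import (
	"fmt"
	"os/exec"
)

// initCluster promotes the freshly generated config and runs kubeadm against
// it. Minimal sketch: the additional flags minikube passes are not visible in
// this part of the log and are intentionally left out.
func initCluster() error {
	if out, err := exec.Command("sudo", "cp",
		"/var/tmp/minikube/kubeadm.yaml.new", "/var/tmp/minikube/kubeadm.yaml").CombinedOutput(); err != nil {
		return fmt.Errorf("promote config: %v: %s", err, out)
	}
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.30.3/kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubeadm init: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(initCluster())
}
```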
	I0722 12:14:43.569594   69808 ssh_runner.go:195] Run: grep 192.168.72.184	control-plane.minikube.internal$ /etc/hosts
	I0722 12:14:43.574721   69808 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 12:14:43.593519   69808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 12:14:43.751765   69808 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 12:14:43.771436   69808 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820 for IP: 192.168.72.184
	I0722 12:14:43.771459   69808 certs.go:194] generating shared ca certs ...
	I0722 12:14:43.771478   69808 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 12:14:43.771637   69808 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 12:14:43.771696   69808 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 12:14:43.771708   69808 certs.go:256] generating profile certs ...
	I0722 12:14:43.772033   69808 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/client.key
	I0722 12:14:43.772063   69808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/client.crt with IP's: []
	I0722 12:14:44.050563   69808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/client.crt ...
	I0722 12:14:44.050594   69808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/client.crt: {Name:mk874c3b19974c30740055e7836811e44b92dceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 12:14:44.050760   69808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/client.key ...
	I0722 12:14:44.050783   69808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/client.key: {Name:mk99b04d3bc4def0c06cbf4e1c312a56571286bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 12:14:44.050878   69808 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/apiserver.key.4b805f94
	I0722 12:14:44.050893   69808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/apiserver.crt.4b805f94 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.184]
	I0722 12:14:44.426881   69808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/apiserver.crt.4b805f94 ...
	I0722 12:14:44.426916   69808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/apiserver.crt.4b805f94: {Name:mk959cee571586068d52ebf9f00832b0ea83a54f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 12:14:44.427092   69808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/apiserver.key.4b805f94 ...
	I0722 12:14:44.427111   69808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/apiserver.key.4b805f94: {Name:mkbcca5b53fb6d76f2728490ba34c88dbf2c599a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 12:14:44.427244   69808 certs.go:381] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/apiserver.crt.4b805f94 -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/apiserver.crt
	I0722 12:14:44.427359   69808 certs.go:385] copying /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/apiserver.key.4b805f94 -> /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/apiserver.key
	I0722 12:14:44.427441   69808 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/proxy-client.key
	I0722 12:14:44.427464   69808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/proxy-client.crt with IP's: []
	I0722 12:14:44.695349   69808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/proxy-client.crt ...
	I0722 12:14:44.695387   69808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/proxy-client.crt: {Name:mk49a76c80d494d69d01523d2dc8d005a6107b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 12:14:44.695615   69808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/proxy-client.key ...
	I0722 12:14:44.695639   69808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/proxy-client.key: {Name:mk5f920191b42611a9dcd4ae1904edf94119162d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
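	(Annotation) The certs.go lines above reuse the shared minikubeCA/proxyClientCA keys and mint per-profile certificates: a client cert, an apiserver cert covering 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP, and an aggregator proxy-client cert. The Go sketch below shows the general shape of issuing such a CA-signed certificate with crypto/x509; it is a generic illustration, not minikube's certs.go.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signedCert issues a certificate for the given IPs, signed by the supplied
// CA, roughly what the "generating signed profile cert" steps above do.
func signedCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// Throwaway CA for the example, standing in for the cached minikubeCA key.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	certPEM, _, err := signedCert(caCert, caKey, []net.IP{net.ParseIP("192.168.72.184")})
	fmt.Println(len(certPEM), err)
}
```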
	I0722 12:14:44.695926   69808 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 12:14:44.695978   69808 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 12:14:44.695986   69808 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 12:14:44.696018   69808 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 12:14:44.696065   69808 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 12:14:44.696102   69808 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 12:14:44.696165   69808 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 12:14:44.696798   69808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 12:14:44.751021   69808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 12:14:44.792257   69808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 12:14:44.834413   69808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 12:14:44.861294   69808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 12:14:44.889403   69808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 12:14:44.940699   69808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 12:14:44.979551   69808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/custom-flannel-511820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 12:14:45.016256   69808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 12:14:45.048315   69808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 12:14:45.079055   69808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 12:14:45.107835   69808 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 12:14:45.126986   69808 ssh_runner.go:195] Run: openssl version
	I0722 12:14:45.134730   69808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 12:14:45.147255   69808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 12:14:45.152656   69808 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 12:14:45.152713   69808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 12:14:45.159224   69808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 12:14:45.172663   69808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 12:14:45.186422   69808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 12:14:45.191677   69808 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 12:14:45.191740   69808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 12:14:45.198397   69808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 12:14:45.210940   69808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 12:14:45.223724   69808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 12:14:45.232628   69808 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 12:14:45.232698   69808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 12:14:45.240486   69808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
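	(Annotation) The block above installs each CA bundle under /usr/share/ca-certificates and then symlinks it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 here) so TLS libraries can locate it by hash. A small Go sketch of that "hash and link if missing" step, shelling out to the same openssl command the log uses; it needs root to write /etc/ssl/certs.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a CA certificate and, if no
// link exists yet, symlinks it as /etc/ssl/certs/<hash>.0 — the Go equivalent
// of the "test -L ... || ln -fs ..." commands in the log. Sketch only.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); err == nil {
		return nil // link already present
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("error:", err)
	}
}
```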
	I0722 12:14:45.256194   69808 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 12:14:45.261271   69808 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 12:14:45.261339   69808 kubeadm.go:392] StartCluster: {Name:custom-flannel-511820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.3 ClusterName:custom-flannel-511820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.72.184 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 12:14:45.261485   69808 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 12:14:45.261585   69808 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 12:14:45.304339   69808 cri.go:89] found id: ""
	I0722 12:14:45.304439   69808 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 12:14:45.320073   69808 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 12:14:45.333130   69808 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 12:14:45.349871   69808 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 12:14:45.349892   69808 kubeadm.go:157] found existing configuration files:
	
	I0722 12:14:45.349944   69808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 12:14:45.365934   69808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 12:14:45.366003   69808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 12:14:45.384133   69808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 12:14:45.396217   69808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 12:14:45.396286   69808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 12:14:45.412820   69808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 12:14:45.428885   69808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 12:14:45.428956   69808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 12:14:45.445037   69808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 12:14:45.459131   69808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 12:14:45.459196   69808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
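The four grep/rm pairs above are the stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed so the following kubeadm init can regenerate it. A rough Go equivalent of that loop (paths and endpoint taken from the log; a sketch only, not the actual kubeadm.go code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + name
            data, err := os.ReadFile(path)
            // Missing, or pointing at a different API server: delete it so kubeadm init rewrites it.
            if err != nil || !strings.Contains(string(data), endpoint) {
                if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
                    fmt.Fprintln(os.Stderr, rmErr)
                }
            }
        }
    }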
	I0722 12:14:45.471849   69808 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 12:14:45.557274   69808 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 12:14:45.557356   69808 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 12:14:45.723279   69808 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 12:14:45.723455   69808 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 12:14:45.723657   69808 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 12:14:45.944206   69808 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 12:14:46.046009   69808 out.go:204]   - Generating certificates and keys ...
	I0722 12:14:46.046216   69808 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 12:14:46.046320   69808 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 12:14:46.046447   69808 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0722 12:14:46.088518   69808 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0722 12:14:46.169690   69808 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0722 12:14:46.426493   69808 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0722 12:14:46.514606   69808 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0722 12:14:46.514984   69808 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-511820 localhost] and IPs [192.168.72.184 127.0.0.1 ::1]
	I0722 12:14:46.585809   69808 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0722 12:14:46.586146   69808 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-511820 localhost] and IPs [192.168.72.184 127.0.0.1 ::1]
	I0722 12:14:46.686078   69808 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0722 12:14:46.959130   69808 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0722 12:14:47.112638   69808 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0722 12:14:47.112887   69808 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 12:14:47.279792   69808 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 12:14:47.439785   69808 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 12:14:47.526609   69808 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 12:14:47.850058   69808 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 12:14:47.971898   69808 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 12:14:47.972567   69808 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 12:14:47.979441   69808 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 12:14:44.721876   69537 node_ready.go:53] node "calico-511820" has status "Ready":"False"
	I0722 12:14:47.220785   69537 node_ready.go:53] node "calico-511820" has status "Ready":"False"
	I0722 12:14:47.720790   69537 node_ready.go:49] node "calico-511820" has status "Ready":"True"
	I0722 12:14:47.720822   69537 node_ready.go:38] duration metric: took 7.003442638s for node "calico-511820" to be "Ready" ...
	I0722 12:14:47.720834   69537 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 12:14:47.735234   69537 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-564985c589-f88hd" in "kube-system" namespace to be "Ready" ...
	I0722 12:14:46.834762   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | domain enable-default-cni-511820 has defined MAC address 52:54:00:4e:68:bb in network mk-enable-default-cni-511820
	I0722 12:14:46.835386   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | unable to find current IP address of domain enable-default-cni-511820 in network mk-enable-default-cni-511820
	I0722 12:14:46.835411   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:46.835321   71701 retry.go:31] will retry after 1.471332812s: waiting for machine to come up
	I0722 12:14:48.308116   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | domain enable-default-cni-511820 has defined MAC address 52:54:00:4e:68:bb in network mk-enable-default-cni-511820
	I0722 12:14:48.308621   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | unable to find current IP address of domain enable-default-cni-511820 in network mk-enable-default-cni-511820
	I0722 12:14:48.308657   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:48.308576   71701 retry.go:31] will retry after 1.459920449s: waiting for machine to come up
	I0722 12:14:49.770595   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | domain enable-default-cni-511820 has defined MAC address 52:54:00:4e:68:bb in network mk-enable-default-cni-511820
	I0722 12:14:49.771158   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | unable to find current IP address of domain enable-default-cni-511820 in network mk-enable-default-cni-511820
	I0722 12:14:49.771204   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:49.771115   71701 retry.go:31] will retry after 2.117101545s: waiting for machine to come up
	I0722 12:14:47.981016   69808 out.go:204]   - Booting up control plane ...
	I0722 12:14:47.981138   69808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 12:14:47.981252   69808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 12:14:47.982024   69808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 12:14:48.004478   69808 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 12:14:48.005579   69808 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 12:14:48.005660   69808 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 12:14:48.192936   69808 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 12:14:48.193083   69808 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 12:14:48.694619   69808 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.99466ms
	I0722 12:14:48.694821   69808 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
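The api-check phase above simply polls the API server's health endpoint until it answers. A self-contained probe of the same endpoint could look like the sketch below; the address comes from the node IP and APIServerPort in the cluster config logged earlier (192.168.72.184:8443), and skipping TLS verification is only to keep the example dependency-free:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        // /healthz is readable anonymously on a default kubeadm API server.
        for i := 0; i < 240; i++ {
            resp, err := client.Get("https://192.168.72.184:8443/healthz")
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                fmt.Println("API server is healthy")
                return
            }
            if resp != nil {
                resp.Body.Close()
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for a healthy API server")
    }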
	I0722 12:14:49.743681   69537 pod_ready.go:102] pod "calico-kube-controllers-564985c589-f88hd" in "kube-system" namespace has status "Ready":"False"
	I0722 12:14:52.248608   69537 pod_ready.go:102] pod "calico-kube-controllers-564985c589-f88hd" in "kube-system" namespace has status "Ready":"False"
	I0722 12:14:55.192900   69808 kubeadm.go:310] [api-check] The API server is healthy after 6.501234782s
	I0722 12:14:55.212846   69808 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 12:14:55.225790   69808 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 12:14:55.260363   69808 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 12:14:55.260647   69808 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-511820 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 12:14:55.271655   69808 kubeadm.go:310] [bootstrap-token] Using token: pycd72.sbavd6asoujqtzwt
	I0722 12:14:51.889620   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | domain enable-default-cni-511820 has defined MAC address 52:54:00:4e:68:bb in network mk-enable-default-cni-511820
	I0722 12:14:51.890044   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | unable to find current IP address of domain enable-default-cni-511820 in network mk-enable-default-cni-511820
	I0722 12:14:51.890069   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:51.890013   71701 retry.go:31] will retry after 1.986409088s: waiting for machine to come up
	I0722 12:14:53.878131   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | domain enable-default-cni-511820 has defined MAC address 52:54:00:4e:68:bb in network mk-enable-default-cni-511820
	I0722 12:14:53.879607   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | unable to find current IP address of domain enable-default-cni-511820 in network mk-enable-default-cni-511820
	I0722 12:14:53.879652   71664 main.go:141] libmachine: (enable-default-cni-511820) DBG | I0722 12:14:53.879566   71701 retry.go:31] will retry after 2.357629561s: waiting for machine to come up
	I0722 12:14:55.273039   69808 out.go:204]   - Configuring RBAC rules ...
	I0722 12:14:55.273199   69808 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 12:14:55.282068   69808 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 12:14:55.290355   69808 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 12:14:55.294058   69808 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 12:14:55.297729   69808 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 12:14:55.300549   69808 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 12:14:55.599337   69808 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 12:14:56.060850   69808 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 12:14:56.600055   69808 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 12:14:56.601155   69808 kubeadm.go:310] 
	I0722 12:14:56.601235   69808 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 12:14:56.601254   69808 kubeadm.go:310] 
	I0722 12:14:56.601389   69808 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 12:14:56.601400   69808 kubeadm.go:310] 
	I0722 12:14:56.601430   69808 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 12:14:56.601510   69808 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 12:14:56.601601   69808 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 12:14:56.601622   69808 kubeadm.go:310] 
	I0722 12:14:56.601690   69808 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 12:14:56.601701   69808 kubeadm.go:310] 
	I0722 12:14:56.601758   69808 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 12:14:56.601767   69808 kubeadm.go:310] 
	I0722 12:14:56.601842   69808 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 12:14:56.601973   69808 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 12:14:56.602074   69808 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 12:14:56.602096   69808 kubeadm.go:310] 
	I0722 12:14:56.602218   69808 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 12:14:56.602331   69808 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 12:14:56.602346   69808 kubeadm.go:310] 
	I0722 12:14:56.602499   69808 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pycd72.sbavd6asoujqtzwt \
	I0722 12:14:56.602674   69808 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 12:14:56.602716   69808 kubeadm.go:310] 	--control-plane 
	I0722 12:14:56.602726   69808 kubeadm.go:310] 
	I0722 12:14:56.602835   69808 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 12:14:56.602846   69808 kubeadm.go:310] 
	I0722 12:14:56.602956   69808 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pycd72.sbavd6asoujqtzwt \
	I0722 12:14:56.603107   69808 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 12:14:56.603586   69808 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
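The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's Subject Public Key Info, so it can be recomputed from the CA certificate if a join command ever has to be reconstructed. A minimal sketch, assuming the ca.crt sits in the certificateDir ("/var/lib/minikube/certs") reported earlier in this log:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded Subject Public Key Info of the CA.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }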
	I0722 12:14:56.603625   69808 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0722 12:14:56.605575   69808 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0722 12:14:56.607098   69808 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0722 12:14:56.607153   69808 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml
	I0722 12:14:56.613068   69808 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0722 12:14:56.613097   69808 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0722 12:14:56.641698   69808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0722 12:14:57.196501   69808 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 12:14:57.196595   69808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 12:14:57.196649   69808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-511820 minikube.k8s.io/updated_at=2024_07_22T12_14_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=custom-flannel-511820 minikube.k8s.io/primary=true
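The last two commands above shell out to kubectl to grant the kube-system default service account cluster-admin and to stamp the minikube labels onto the new node. The node labeling can also be done directly against the API with client-go; a hedged sketch, where the kubeconfig path, node name, and label keys are taken from the log and everything else is illustrative:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        node, err := cs.CoreV1().Nodes().Get(ctx, "custom-flannel-511820", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if node.Labels == nil {
            node.Labels = map[string]string{}
        }
        // Same labels the kubectl invocation above applies with --overwrite.
        node.Labels["minikube.k8s.io/name"] = "custom-flannel-511820"
        node.Labels["minikube.k8s.io/primary"] = "true"
        if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }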
	
	
	==> CRI-O <==
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.709663549Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650497709631992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09aef0d0-8c8a-4c15-8f1b-c92239fb60f0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.712879725Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94b0a405-97e3-43f3-bdb2-d54429b57530 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.713003487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94b0a405-97e3-43f3-bdb2-d54429b57530 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.713387298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d62ba1b3090700c9cc4e355512f7cc3cd995f45c9a81380d21e6f65141f4edf,PodSandboxId:51b648598da7047598c076549ba95030986bd416e59171441da669cfe73c381e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721649419477758362,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff4a3e-008c-4c4e-9eb3-281c46b10279,},Annotations:map[string]string{io.kubernetes.container.hash: a5c39666,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8aefb11b4f0ba3a4c72f3542be8c94f63fcde72512953ec948268091c82ac3,PodSandboxId:a842394945019e02b0c66e0d18ad7a4e806568746cf3e021fd8955367403fc57,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649419183771349,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nlfgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02dda63-e71b-429f-b9d5-0b2ca40e8dcc,},Annotations:map[string]string{io.kubernetes.container.hash: c334977,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:432537d466c8bee7c20a34a3c6a5a75a34037c950cddcf5f6fa16d56dc2819ee,PodSandboxId:3ce3eb65599812c4e902ddaf2a7b2e3cef3fd6d7815616d5ff44b66ef66884ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649418896209316,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tnnxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 337c6df7-035c-488d-a123-a410d76d836b,},Annotations:map[string]string{io.kubernetes.container.hash: b00a2bfa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c615bf54ba39489d87267358018aced180e2d2ed4176890b07657c7f84888012,PodSandboxId:d845155770c78ea7c0f688f16ad322a84f1d160bb8783aeeca71f5456b424101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1721649418764148038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-58qcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c02c70-a840-410c-9d48-3d15a3927a77,},Annotations:map[string]string{io.kubernetes.container.hash: be0de0cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a96c39f4a48d2da7db089ded0c452d0eb329605826cf1de6007c2ee945a1ea2,PodSandboxId:55f657b008814eddbd2b6f4b56f1e79f07777d886ee2be08a9ab312dbf0a63e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721649398996484062,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333910fd7a599754e228f3a02579e9b3,},Annotations:map[string]string{io.kubernetes.container.hash: a092ece8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9561f587825f7b2f1ed0170773f5e1bb49b323711509e51aa110494a33e3d185,PodSandboxId:e0a43c4765d9a23eab09f73458094fd64df000f341149d39181823a9dbc1f1a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721649398978204722,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed9adb69859978175606b15fe22afa16,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c792eb6ba9b67b8c9990240703e3195a8bff6158dfeb0b2da84d7b08d61cd6,PodSandboxId:db6387d8039152bd6bb3da85188473d1b77a36ab086581d95bb8b957a1c8fce1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721649398939032395,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79412e76d5c889e0a6afa4ad891ae951,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce42664a9cd3636fa44510de90daf5f0c9082d77580af09c0515ae2359f3fc87,PodSandboxId:7e247212480d9f05b1019b4c39331c892c0f7ff7e9c5fb23bc4acfe71ac60300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721649398872728178,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda9b2ba3aef6048a36128965540beb9,},Annotations:map[string]string{io.kubernetes.container.hash: f892f9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94b0a405-97e3-43f3-bdb2-d54429b57530 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.755200738Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70896a41-4761-4b7e-a874-ef2571145933 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.755313954Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70896a41-4761-4b7e-a874-ef2571145933 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.756695627Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91570f8b-79a9-465d-b73f-42ab5efa66fc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.757289711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650497757258186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91570f8b-79a9-465d-b73f-42ab5efa66fc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.757864189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=149fd02d-aab6-4769-8db7-c55e5eb9e37f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.758090649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=149fd02d-aab6-4769-8db7-c55e5eb9e37f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.758358638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d62ba1b3090700c9cc4e355512f7cc3cd995f45c9a81380d21e6f65141f4edf,PodSandboxId:51b648598da7047598c076549ba95030986bd416e59171441da669cfe73c381e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721649419477758362,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff4a3e-008c-4c4e-9eb3-281c46b10279,},Annotations:map[string]string{io.kubernetes.container.hash: a5c39666,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8aefb11b4f0ba3a4c72f3542be8c94f63fcde72512953ec948268091c82ac3,PodSandboxId:a842394945019e02b0c66e0d18ad7a4e806568746cf3e021fd8955367403fc57,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649419183771349,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nlfgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02dda63-e71b-429f-b9d5-0b2ca40e8dcc,},Annotations:map[string]string{io.kubernetes.container.hash: c334977,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:432537d466c8bee7c20a34a3c6a5a75a34037c950cddcf5f6fa16d56dc2819ee,PodSandboxId:3ce3eb65599812c4e902ddaf2a7b2e3cef3fd6d7815616d5ff44b66ef66884ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649418896209316,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tnnxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 337c6df7-035c-488d-a123-a410d76d836b,},Annotations:map[string]string{io.kubernetes.container.hash: b00a2bfa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c615bf54ba39489d87267358018aced180e2d2ed4176890b07657c7f84888012,PodSandboxId:d845155770c78ea7c0f688f16ad322a84f1d160bb8783aeeca71f5456b424101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1721649418764148038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-58qcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c02c70-a840-410c-9d48-3d15a3927a77,},Annotations:map[string]string{io.kubernetes.container.hash: be0de0cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a96c39f4a48d2da7db089ded0c452d0eb329605826cf1de6007c2ee945a1ea2,PodSandboxId:55f657b008814eddbd2b6f4b56f1e79f07777d886ee2be08a9ab312dbf0a63e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721649398996484062,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333910fd7a599754e228f3a02579e9b3,},Annotations:map[string]string{io.kubernetes.container.hash: a092ece8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9561f587825f7b2f1ed0170773f5e1bb49b323711509e51aa110494a33e3d185,PodSandboxId:e0a43c4765d9a23eab09f73458094fd64df000f341149d39181823a9dbc1f1a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721649398978204722,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed9adb69859978175606b15fe22afa16,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c792eb6ba9b67b8c9990240703e3195a8bff6158dfeb0b2da84d7b08d61cd6,PodSandboxId:db6387d8039152bd6bb3da85188473d1b77a36ab086581d95bb8b957a1c8fce1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721649398939032395,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79412e76d5c889e0a6afa4ad891ae951,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce42664a9cd3636fa44510de90daf5f0c9082d77580af09c0515ae2359f3fc87,PodSandboxId:7e247212480d9f05b1019b4c39331c892c0f7ff7e9c5fb23bc4acfe71ac60300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721649398872728178,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda9b2ba3aef6048a36128965540beb9,},Annotations:map[string]string{io.kubernetes.container.hash: f892f9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=149fd02d-aab6-4769-8db7-c55e5eb9e37f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.805015518Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c165e455-025b-4aab-a389-ca773477fcfb name=/runtime.v1.RuntimeService/Version
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.805125548Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c165e455-025b-4aab-a389-ca773477fcfb name=/runtime.v1.RuntimeService/Version
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.806983659Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8123334e-363e-494c-9fff-6ba23e9980cf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.808008340Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650497807974851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8123334e-363e-494c-9fff-6ba23e9980cf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.809180336Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b27d8408-b781-40b2-8ad4-6e98e3c21390 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.809253166Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b27d8408-b781-40b2-8ad4-6e98e3c21390 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.809730456Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d62ba1b3090700c9cc4e355512f7cc3cd995f45c9a81380d21e6f65141f4edf,PodSandboxId:51b648598da7047598c076549ba95030986bd416e59171441da669cfe73c381e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721649419477758362,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff4a3e-008c-4c4e-9eb3-281c46b10279,},Annotations:map[string]string{io.kubernetes.container.hash: a5c39666,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8aefb11b4f0ba3a4c72f3542be8c94f63fcde72512953ec948268091c82ac3,PodSandboxId:a842394945019e02b0c66e0d18ad7a4e806568746cf3e021fd8955367403fc57,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649419183771349,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nlfgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02dda63-e71b-429f-b9d5-0b2ca40e8dcc,},Annotations:map[string]string{io.kubernetes.container.hash: c334977,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:432537d466c8bee7c20a34a3c6a5a75a34037c950cddcf5f6fa16d56dc2819ee,PodSandboxId:3ce3eb65599812c4e902ddaf2a7b2e3cef3fd6d7815616d5ff44b66ef66884ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649418896209316,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tnnxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 337c6df7-035c-488d-a123-a410d76d836b,},Annotations:map[string]string{io.kubernetes.container.hash: b00a2bfa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c615bf54ba39489d87267358018aced180e2d2ed4176890b07657c7f84888012,PodSandboxId:d845155770c78ea7c0f688f16ad322a84f1d160bb8783aeeca71f5456b424101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1721649418764148038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-58qcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c02c70-a840-410c-9d48-3d15a3927a77,},Annotations:map[string]string{io.kubernetes.container.hash: be0de0cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a96c39f4a48d2da7db089ded0c452d0eb329605826cf1de6007c2ee945a1ea2,PodSandboxId:55f657b008814eddbd2b6f4b56f1e79f07777d886ee2be08a9ab312dbf0a63e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721649398996484062,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333910fd7a599754e228f3a02579e9b3,},Annotations:map[string]string{io.kubernetes.container.hash: a092ece8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9561f587825f7b2f1ed0170773f5e1bb49b323711509e51aa110494a33e3d185,PodSandboxId:e0a43c4765d9a23eab09f73458094fd64df000f341149d39181823a9dbc1f1a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721649398978204722,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed9adb69859978175606b15fe22afa16,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c792eb6ba9b67b8c9990240703e3195a8bff6158dfeb0b2da84d7b08d61cd6,PodSandboxId:db6387d8039152bd6bb3da85188473d1b77a36ab086581d95bb8b957a1c8fce1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721649398939032395,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79412e76d5c889e0a6afa4ad891ae951,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce42664a9cd3636fa44510de90daf5f0c9082d77580af09c0515ae2359f3fc87,PodSandboxId:7e247212480d9f05b1019b4c39331c892c0f7ff7e9c5fb23bc4acfe71ac60300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721649398872728178,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda9b2ba3aef6048a36128965540beb9,},Annotations:map[string]string{io.kubernetes.container.hash: f892f9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b27d8408-b781-40b2-8ad4-6e98e3c21390 name=/runtime.v1.RuntimeService/ListContainers
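The repeating ListContainers request/response pairs in this CRI-O excerpt are the runtime being polled over its CRI gRPC socket. The same query that was issued earlier in the log via `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` can be made programmatically; a sketch assuming the default CRI-O socket path (running it requires access to that socket, typically root):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{
            Filter: &runtimeapi.ContainerFilter{
                // Same label filter as the crictl call logged earlier.
                LabelSelector: map[string]string{"io.kubernetes.pod.namespace": "kube-system"},
            },
        })
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Println(c.Id, c.Metadata.Name, c.State)
        }
    }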
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.855050214Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fefae496-071b-4337-9b37-5cc3bd3ec8ae name=/runtime.v1.RuntimeService/Version
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.855161224Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fefae496-071b-4337-9b37-5cc3bd3ec8ae name=/runtime.v1.RuntimeService/Version
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.856826914Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=400c6b07-42db-4ee7-96b0-4ae909e7ff54 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.857365000Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650497857336791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=400c6b07-42db-4ee7-96b0-4ae909e7ff54 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.858302047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ba1ea54-4b5e-4481-b855-91bc83dcf603 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.858378659Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ba1ea54-4b5e-4481-b855-91bc83dcf603 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:14:57 default-k8s-diff-port-605740 crio[726]: time="2024-07-22 12:14:57.859024618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d62ba1b3090700c9cc4e355512f7cc3cd995f45c9a81380d21e6f65141f4edf,PodSandboxId:51b648598da7047598c076549ba95030986bd416e59171441da669cfe73c381e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721649419477758362,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff4a3e-008c-4c4e-9eb3-281c46b10279,},Annotations:map[string]string{io.kubernetes.container.hash: a5c39666,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8aefb11b4f0ba3a4c72f3542be8c94f63fcde72512953ec948268091c82ac3,PodSandboxId:a842394945019e02b0c66e0d18ad7a4e806568746cf3e021fd8955367403fc57,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649419183771349,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nlfgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02dda63-e71b-429f-b9d5-0b2ca40e8dcc,},Annotations:map[string]string{io.kubernetes.container.hash: c334977,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:432537d466c8bee7c20a34a3c6a5a75a34037c950cddcf5f6fa16d56dc2819ee,PodSandboxId:3ce3eb65599812c4e902ddaf2a7b2e3cef3fd6d7815616d5ff44b66ef66884ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649418896209316,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tnnxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 337c6df7-035c-488d-a123-a410d76d836b,},Annotations:map[string]string{io.kubernetes.container.hash: b00a2bfa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c615bf54ba39489d87267358018aced180e2d2ed4176890b07657c7f84888012,PodSandboxId:d845155770c78ea7c0f688f16ad322a84f1d160bb8783aeeca71f5456b424101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,
CreatedAt:1721649418764148038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-58qcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c02c70-a840-410c-9d48-3d15a3927a77,},Annotations:map[string]string{io.kubernetes.container.hash: be0de0cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a96c39f4a48d2da7db089ded0c452d0eb329605826cf1de6007c2ee945a1ea2,PodSandboxId:55f657b008814eddbd2b6f4b56f1e79f07777d886ee2be08a9ab312dbf0a63e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721649398996484062,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 333910fd7a599754e228f3a02579e9b3,},Annotations:map[string]string{io.kubernetes.container.hash: a092ece8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9561f587825f7b2f1ed0170773f5e1bb49b323711509e51aa110494a33e3d185,PodSandboxId:e0a43c4765d9a23eab09f73458094fd64df000f341149d39181823a9dbc1f1a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721649398978204722,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed9adb69859978175606b15fe22afa16,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c792eb6ba9b67b8c9990240703e3195a8bff6158dfeb0b2da84d7b08d61cd6,PodSandboxId:db6387d8039152bd6bb3da85188473d1b77a36ab086581d95bb8b957a1c8fce1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721649398939032395,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79412e76d5c889e0a6afa4ad891ae951,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce42664a9cd3636fa44510de90daf5f0c9082d77580af09c0515ae2359f3fc87,PodSandboxId:7e247212480d9f05b1019b4c39331c892c0f7ff7e9c5fb23bc4acfe71ac60300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721649398872728178,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-605740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda9b2ba3aef6048a36128965540beb9,},Annotations:map[string]string{io.kubernetes.container.hash: f892f9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ba1ea54-4b5e-4481-b855-91bc83dcf603 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5d62ba1b30907       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   51b648598da70       storage-provisioner
	5b8aefb11b4f0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 minutes ago      Running             coredns                   0                   a842394945019       coredns-7db6d8ff4d-nlfgl
	432537d466c8b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 minutes ago      Running             coredns                   0                   3ce3eb6559981       coredns-7db6d8ff4d-tnnxf
	c615bf54ba394       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   17 minutes ago      Running             kube-proxy                0                   d845155770c78       kube-proxy-58qcp
	2a96c39f4a48d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   18 minutes ago      Running             etcd                      2                   55f657b008814       etcd-default-k8s-diff-port-605740
	9561f587825f7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   18 minutes ago      Running             kube-scheduler            2                   e0a43c4765d9a       kube-scheduler-default-k8s-diff-port-605740
	52c792eb6ba9b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   18 minutes ago      Running             kube-controller-manager   2                   db6387d803915       kube-controller-manager-default-k8s-diff-port-605740
	ce42664a9cd36       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   18 minutes ago      Running             kube-apiserver            2                   7e247212480d9       kube-apiserver-default-k8s-diff-port-605740
	
	
	==> coredns [432537d466c8bee7c20a34a3c6a5a75a34037c950cddcf5f6fa16d56dc2819ee] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [5b8aefb11b4f0ba3a4c72f3542be8c94f63fcde72512953ec948268091c82ac3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-605740
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-605740
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=default-k8s-diff-port-605740
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T11_56_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 11:56:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-605740
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 12:14:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 12:12:20 +0000   Mon, 22 Jul 2024 11:56:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 12:12:20 +0000   Mon, 22 Jul 2024 11:56:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 12:12:20 +0000   Mon, 22 Jul 2024 11:56:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 12:12:20 +0000   Mon, 22 Jul 2024 11:56:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    default-k8s-diff-port-605740
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fff1d262e8904b2ca6da869b38918cfa
	  System UUID:                fff1d262-e890-4b2c-a6da-869b38918cfa
	  Boot ID:                    afc6903b-aa25-43a8-bb6a-9fb2f2fad052
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-nlfgl                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-7db6d8ff4d-tnnxf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-default-k8s-diff-port-605740                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kube-apiserver-default-k8s-diff-port-605740             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-605740    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-58qcp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-default-k8s-diff-port-605740             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-569cc877fc-2xv7x                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-605740 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-605740 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node default-k8s-diff-port-605740 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node default-k8s-diff-port-605740 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node default-k8s-diff-port-605740 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node default-k8s-diff-port-605740 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node default-k8s-diff-port-605740 event: Registered Node default-k8s-diff-port-605740 in Controller
	
	
	==> dmesg <==
	[  +0.042258] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.813390] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.419009] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.609954] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.202692] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.064059] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061316] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.213954] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.119542] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.315886] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +4.575435] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.066115] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.857451] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +5.604653] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.287302] kauditd_printk_skb: 50 callbacks suppressed
	[Jul22 11:52] kauditd_printk_skb: 27 callbacks suppressed
	[Jul22 11:56] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.796098] systemd-fstab-generator[3586]: Ignoring "noauto" option for root device
	[  +4.643419] kauditd_printk_skb: 55 callbacks suppressed
	[  +1.408020] systemd-fstab-generator[3910]: Ignoring "noauto" option for root device
	[ +13.908306] systemd-fstab-generator[4105]: Ignoring "noauto" option for root device
	[  +0.097831] kauditd_printk_skb: 14 callbacks suppressed
	[Jul22 11:58] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [2a96c39f4a48d2da7db089ded0c452d0eb329605826cf1de6007c2ee945a1ea2] <==
	{"level":"info","ts":"2024-07-22T11:56:39.622826Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:56:39.625267Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8794d44e1d88e05d","local-member-id":"aad771494ea7416a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:56:39.625596Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:56:39.625339Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aad771494ea7416a","local-member-attributes":"{Name:default-k8s-diff-port-605740 ClientURLs:[https://192.168.39.87:2379]}","request-path":"/0/members/aad771494ea7416a/attributes","cluster-id":"8794d44e1d88e05d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T11:56:39.625354Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:56:39.625363Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:56:39.629604Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T11:56:39.629891Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T11:56:39.629916Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:56:39.642092Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T11:56:39.677464Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.87:2379"}
	{"level":"info","ts":"2024-07-22T12:06:39.753229Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":715}
	{"level":"info","ts":"2024-07-22T12:06:39.763404Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":715,"took":"9.406927ms","hash":4120341149,"current-db-size-bytes":2301952,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2301952,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-22T12:06:39.763503Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4120341149,"revision":715,"compact-revision":-1}
	{"level":"info","ts":"2024-07-22T12:11:39.761597Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":958}
	{"level":"info","ts":"2024-07-22T12:11:39.76692Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":958,"took":"4.82773ms","hash":1625877502,"current-db-size-bytes":2301952,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1581056,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-22T12:11:39.767007Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1625877502,"revision":958,"compact-revision":715}
	{"level":"info","ts":"2024-07-22T12:11:46.328807Z","caller":"traceutil/trace.go:171","msg":"trace[2130506053] transaction","detail":"{read_only:false; response_revision:1208; number_of_response:1; }","duration":"138.583727ms","start":"2024-07-22T12:11:46.190188Z","end":"2024-07-22T12:11:46.328772Z","steps":["trace[2130506053] 'process raft request'  (duration: 138.451188ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T12:12:11.093997Z","caller":"traceutil/trace.go:171","msg":"trace[1644792069] transaction","detail":"{read_only:false; response_revision:1228; number_of_response:1; }","duration":"114.516344ms","start":"2024-07-22T12:12:10.979467Z","end":"2024-07-22T12:12:11.093983Z","steps":["trace[1644792069] 'process raft request'  (duration: 114.406859ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T12:12:38.842483Z","caller":"traceutil/trace.go:171","msg":"trace[1039004082] linearizableReadLoop","detail":"{readStateIndex:1460; appliedIndex:1459; }","duration":"111.865603ms","start":"2024-07-22T12:12:38.730587Z","end":"2024-07-22T12:12:38.842453Z","steps":["trace[1039004082] 'read index received'  (duration: 111.610633ms)","trace[1039004082] 'applied index is now lower than readState.Index'  (duration: 254.3µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T12:12:38.842801Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.118814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-22T12:12:38.842914Z","caller":"traceutil/trace.go:171","msg":"trace[799416993] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; response_count:0; response_revision:1251; }","duration":"112.423247ms","start":"2024-07-22T12:12:38.730479Z","end":"2024-07-22T12:12:38.842903Z","steps":["trace[799416993] 'agreement among raft nodes before linearized reading'  (duration: 112.16337ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T12:12:38.843101Z","caller":"traceutil/trace.go:171","msg":"trace[1390197022] transaction","detail":"{read_only:false; response_revision:1251; number_of_response:1; }","duration":"137.721162ms","start":"2024-07-22T12:12:38.70537Z","end":"2024-07-22T12:12:38.843092Z","steps":["trace[1390197022] 'process raft request'  (duration: 136.905157ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T12:14:43.762796Z","caller":"traceutil/trace.go:171","msg":"trace[177632536] transaction","detail":"{read_only:false; response_revision:1352; number_of_response:1; }","duration":"126.422871ms","start":"2024-07-22T12:14:43.636337Z","end":"2024-07-22T12:14:43.76276Z","steps":["trace[177632536] 'process raft request'  (duration: 126.295304ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T12:14:49.902069Z","caller":"traceutil/trace.go:171","msg":"trace[1714030266] transaction","detail":"{read_only:false; response_revision:1357; number_of_response:1; }","duration":"106.77372ms","start":"2024-07-22T12:14:49.795276Z","end":"2024-07-22T12:14:49.902049Z","steps":["trace[1714030266] 'process raft request'  (duration: 106.367171ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:14:58 up 23 min,  0 users,  load average: 0.66, 0.33, 0.24
	Linux default-k8s-diff-port-605740 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ce42664a9cd3636fa44510de90daf5f0c9082d77580af09c0515ae2359f3fc87] <==
	I0722 12:09:42.600854       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:11:41.602352       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:11:41.602561       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0722 12:11:42.603186       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:11:42.603287       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 12:11:42.603294       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:11:42.603367       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:11:42.603383       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 12:11:42.604612       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:12:42.604083       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:12:42.604331       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 12:12:42.604372       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:12:42.605474       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:12:42.605604       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 12:12:42.605659       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:14:42.605662       1 handler_proxy.go:93] no RequestInfo found in the context
	W0722 12:14:42.605791       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 12:14:42.605831       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 12:14:42.605845       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0722 12:14:42.605931       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 12:14:42.607864       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [52c792eb6ba9b67b8c9990240703e3195a8bff6158dfeb0b2da84d7b08d61cd6] <==
	I0722 12:09:27.694595       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:09:57.106043       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:09:57.702198       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:10:27.111096       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:10:27.709604       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:10:57.117346       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:10:57.718119       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:11:27.123785       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:11:27.727907       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:11:57.129213       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:11:57.736389       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:12:27.134635       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:12:27.745094       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:12:57.140857       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:12:57.753372       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 12:13:01.030132       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="271.195µs"
	I0722 12:13:15.027695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="273.81µs"
	E0722 12:13:27.146329       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:13:27.762253       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:13:57.151725       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:13:57.769978       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:14:27.157738       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:14:27.779345       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:14:57.164125       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 12:14:57.788674       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c615bf54ba39489d87267358018aced180e2d2ed4176890b07657c7f84888012] <==
	I0722 11:56:59.568697       1 server_linux.go:69] "Using iptables proxy"
	I0722 11:56:59.591062       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.87"]
	I0722 11:56:59.675933       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 11:56:59.676101       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 11:56:59.676181       1 server_linux.go:165] "Using iptables Proxier"
	I0722 11:56:59.678791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 11:56:59.679000       1 server.go:872] "Version info" version="v1.30.3"
	I0722 11:56:59.679217       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 11:56:59.680897       1 config.go:192] "Starting service config controller"
	I0722 11:56:59.680974       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 11:56:59.681022       1 config.go:101] "Starting endpoint slice config controller"
	I0722 11:56:59.681039       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 11:56:59.681661       1 config.go:319] "Starting node config controller"
	I0722 11:56:59.682707       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 11:56:59.781711       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 11:56:59.781751       1 shared_informer.go:320] Caches are synced for service config
	I0722 11:56:59.783191       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9561f587825f7b2f1ed0170773f5e1bb49b323711509e51aa110494a33e3d185] <==
	E0722 11:56:41.614010       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 11:56:41.614019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0722 11:56:41.614016       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 11:56:41.614197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 11:56:41.614278       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0722 11:56:41.614287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 11:56:42.520833       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 11:56:42.520890       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 11:56:42.545114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 11:56:42.545165       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 11:56:42.580158       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 11:56:42.580238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 11:56:42.589201       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 11:56:42.589386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 11:56:42.600613       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 11:56:42.600807       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 11:56:42.620718       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 11:56:42.620787       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 11:56:42.685124       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 11:56:42.685228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 11:56:42.742327       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 11:56:42.743494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0722 11:56:42.864333       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 11:56:42.865599       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0722 11:56:45.890047       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 12:12:44 default-k8s-diff-port-605740 kubelet[3917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:12:44 default-k8s-diff-port-605740 kubelet[3917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:12:47 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:12:47.026858    3917 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 22 12:12:47 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:12:47.026942    3917 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 22 12:12:47 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:12:47.027215    3917 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q5z4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathE
xpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdi
nOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-2xv7x_kube-system(7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 22 12:12:47 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:12:47.027279    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:13:01 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:13:01.012745    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:13:15 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:13:15.013076    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:13:29 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:13:29.011868    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:13:42 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:13:42.011959    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:13:44 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:13:44.029718    3917 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 12:13:44 default-k8s-diff-port-605740 kubelet[3917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 12:13:44 default-k8s-diff-port-605740 kubelet[3917]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 12:13:44 default-k8s-diff-port-605740 kubelet[3917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:13:44 default-k8s-diff-port-605740 kubelet[3917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:13:56 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:13:56.013895    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:14:09 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:14:09.011952    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:14:20 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:14:20.011887    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:14:34 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:14:34.012578    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	Jul 22 12:14:44 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:14:44.027422    3917 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 12:14:44 default-k8s-diff-port-605740 kubelet[3917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 12:14:44 default-k8s-diff-port-605740 kubelet[3917]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 12:14:44 default-k8s-diff-port-605740 kubelet[3917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:14:44 default-k8s-diff-port-605740 kubelet[3917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:14:47 default-k8s-diff-port-605740 kubelet[3917]: E0722 12:14:47.013356    3917 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2xv7x" podUID="7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a"
	
	
	==> storage-provisioner [5d62ba1b3090700c9cc4e355512f7cc3cd995f45c9a81380d21e6f65141f4edf] <==
	I0722 11:56:59.627682       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 11:56:59.636888       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 11:56:59.637111       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 11:56:59.645866       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 11:56:59.647207       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-605740_172d62d7-8605-4bf3-8185-6dec47d6d8e0!
	I0722 11:56:59.650962       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ef8bd77f-53d4-42a0-8994-dfd3795ed32f", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-605740_172d62d7-8605-4bf3-8185-6dec47d6d8e0 became leader
	I0722 11:56:59.748921       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-605740_172d62d7-8605-4bf3-8185-6dec47d6d8e0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-605740 -n default-k8s-diff-port-605740
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-605740 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-2xv7x
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-605740 describe pod metrics-server-569cc877fc-2xv7x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-605740 describe pod metrics-server-569cc877fc-2xv7x: exit status 1 (61.306316ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-2xv7x" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-605740 describe pod metrics-server-569cc877fc-2xv7x: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (533.60s)
E0722 12:16:02.519532   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.crt: no such file or directory
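The only non-running pod flagged in the post-mortem above is metrics-server-569cc877fc-2xv7x, and the kubelet and kube-apiserver logs show why: the metrics-server image is redirected to the unreachable fake.domain registry, so the pull backs off and the aggregated v1beta1.metrics.k8s.io API keeps returning 503. A minimal manual check against the same profile, not part of the harness output and assuming the addon keeps its usual k8s-app=metrics-server label, would be:

	$ kubectl --context default-k8s-diff-port-605740 -n kube-system get pods -l k8s-app=metrics-server
	$ kubectl --context default-k8s-diff-port-605740 -n kube-system describe pod -l k8s-app=metrics-server
	$ kubectl --context default-k8s-diff-port-605740 get apiservice v1beta1.metrics.k8s.io

The last command reports the Available condition of the aggregated metrics API; it would stay False for as long as the metrics-server pod cannot pull its image.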

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (298.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-339929 -n no-preload-339929
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-22 12:11:26.463839513 +0000 UTC m=+6157.191253851
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-339929 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-339929 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.607µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-339929 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
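The describe above hit the test's own context deadline, so no deployment info could be captured. If the cluster were still reachable, a hedged manual equivalent (same context and namespace, illustrative only) would list the dashboard deployments and their images to confirm whether the expected registry.k8s.io/echoserver:1.4 override ever appeared; an empty result would mean the dashboard addon never deployed at all:

	$ kubectl --context no-preload-339929 -n kubernetes-dashboard get deploy \
	    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].image}{"\n"}{end}'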
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-339929 -n no-preload-339929
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-339929 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-339929 logs -n 25: (1.559599126s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC | 22 Jul 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-467176                              | cert-expiration-467176       | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-339929             | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-339929                                   | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-467176                              | cert-expiration-467176       | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	| start   | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-802149            | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	| delete  | -p                                                     | disable-driver-mounts-737017 | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	|         | disable-driver-mounts-737017                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:46 UTC |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-101261        | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:45 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-339929                  | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-339929 --memory=2200                     | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC | 22 Jul 24 11:57 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-605740  | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC | 22 Jul 24 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC |                     |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-802149                 | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-101261                              | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:47 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-101261             | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:47 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-101261                              | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-605740       | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:49 UTC | 22 Jul 24 11:57 UTC |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-101261                              | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 12:11 UTC | 22 Jul 24 12:11 UTC |
	| start   | -p newest-cni-355657 --memory=2200 --alsologtostderr   | newest-cni-355657            | jenkins | v1.33.1 | 22 Jul 24 12:11 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 12:11:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 12:11:12.902787   66110 out.go:291] Setting OutFile to fd 1 ...
	I0722 12:11:12.902894   66110 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 12:11:12.902902   66110 out.go:304] Setting ErrFile to fd 2...
	I0722 12:11:12.902905   66110 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 12:11:12.903079   66110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 12:11:12.903589   66110 out.go:298] Setting JSON to false
	I0722 12:11:12.904457   66110 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6825,"bootTime":1721643448,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 12:11:12.904514   66110 start.go:139] virtualization: kvm guest
	I0722 12:11:12.907369   66110 out.go:177] * [newest-cni-355657] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 12:11:12.908823   66110 notify.go:220] Checking for updates...
	I0722 12:11:12.908837   66110 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 12:11:12.910325   66110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 12:11:12.911671   66110 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 12:11:12.912927   66110 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 12:11:12.914084   66110 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 12:11:12.915238   66110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 12:11:12.916656   66110 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 12:11:12.916774   66110 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 12:11:12.916916   66110 config.go:182] Loaded profile config "no-preload-339929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 12:11:12.917018   66110 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 12:11:12.955206   66110 out.go:177] * Using the kvm2 driver based on user configuration
	I0722 12:11:12.956415   66110 start.go:297] selected driver: kvm2
	I0722 12:11:12.956436   66110 start.go:901] validating driver "kvm2" against <nil>
	I0722 12:11:12.956448   66110 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 12:11:12.957420   66110 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 12:11:12.957497   66110 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 12:11:12.971473   66110 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 12:11:12.971510   66110 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0722 12:11:12.971541   66110 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0722 12:11:12.971772   66110 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0722 12:11:12.971803   66110 cni.go:84] Creating CNI manager for ""
	I0722 12:11:12.971818   66110 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 12:11:12.971835   66110 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 12:11:12.971903   66110 start.go:340] cluster config:
	{Name:newest-cni-355657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-355657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 12:11:12.972030   66110 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 12:11:12.973730   66110 out.go:177] * Starting "newest-cni-355657" primary control-plane node in "newest-cni-355657" cluster
	I0722 12:11:12.974795   66110 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 12:11:12.974821   66110 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0722 12:11:12.974828   66110 cache.go:56] Caching tarball of preloaded images
	I0722 12:11:12.974889   66110 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 12:11:12.974906   66110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0722 12:11:12.974982   66110 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/newest-cni-355657/config.json ...
	I0722 12:11:12.975000   66110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/newest-cni-355657/config.json: {Name:mk93f0b86de5f7a65e30a3d62f110e932120829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 12:11:12.975122   66110 start.go:360] acquireMachinesLock for newest-cni-355657: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 12:11:12.975149   66110 start.go:364] duration metric: took 15.841µs to acquireMachinesLock for "newest-cni-355657"
	I0722 12:11:12.975164   66110 start.go:93] Provisioning new machine with config: &{Name:newest-cni-355657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-355657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
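	The provisioning config dumped above is the same structure that was just saved to the profile's config.json (the "Saving config" line earlier in this excerpt). To read a field back from a saved profile instead of scraping the log, a small decoder sketch; the struct below models only a few of the fields visible here and is a guess at the on-disk schema, not minikube's actual config type:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // profileConfig mirrors only a handful of the fields shown in the log above;
    // minikube's real profile config carries many more.
    type profileConfig struct {
        Name             string `json:"Name"`
        Driver           string `json:"Driver"`
        Memory           int    `json:"Memory"`
        CPUs             int    `json:"CPUs"`
        KubernetesConfig struct {
            KubernetesVersion string `json:"KubernetesVersion"`
            ContainerRuntime  string `json:"ContainerRuntime"`
            NetworkPlugin     string `json:"NetworkPlugin"`
        } `json:"KubernetesConfig"`
    }

    func main() {
        // Path copied from the "Saving config" line above.
        data, err := os.ReadFile("/home/jenkins/minikube-integration/19313-5960/.minikube/profiles/newest-cni-355657/config.json")
        if err != nil {
            panic(err)
        }
        var cfg profileConfig
        if err := json.Unmarshal(data, &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("%s: driver=%s mem=%dMB cpus=%d k8s=%s runtime=%s plugin=%s\n",
            cfg.Name, cfg.Driver, cfg.Memory, cfg.CPUs,
            cfg.KubernetesConfig.KubernetesVersion,
            cfg.KubernetesConfig.ContainerRuntime,
            cfg.KubernetesConfig.NetworkPlugin)
    }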
	I0722 12:11:12.975215   66110 start.go:125] createHost starting for "" (driver="kvm2")
	I0722 12:11:12.976679   66110 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 12:11:12.976802   66110 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 12:11:12.976834   66110 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 12:11:12.990752   66110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I0722 12:11:12.991170   66110 main.go:141] libmachine: () Calling .GetVersion
	I0722 12:11:12.991744   66110 main.go:141] libmachine: Using API Version  1
	I0722 12:11:12.991767   66110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 12:11:12.992140   66110 main.go:141] libmachine: () Calling .GetMachineName
	I0722 12:11:12.992375   66110 main.go:141] libmachine: (newest-cni-355657) Calling .GetMachineName
	I0722 12:11:12.992551   66110 main.go:141] libmachine: (newest-cni-355657) Calling .DriverName
	I0722 12:11:12.992706   66110 start.go:159] libmachine.API.Create for "newest-cni-355657" (driver="kvm2")
	I0722 12:11:12.992734   66110 client.go:168] LocalClient.Create starting
	I0722 12:11:12.992768   66110 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem
	I0722 12:11:12.992806   66110 main.go:141] libmachine: Decoding PEM data...
	I0722 12:11:12.992830   66110 main.go:141] libmachine: Parsing certificate...
	I0722 12:11:12.992890   66110 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem
	I0722 12:11:12.992917   66110 main.go:141] libmachine: Decoding PEM data...
	I0722 12:11:12.992938   66110 main.go:141] libmachine: Parsing certificate...
	I0722 12:11:12.992963   66110 main.go:141] libmachine: Running pre-create checks...
	I0722 12:11:12.992981   66110 main.go:141] libmachine: (newest-cni-355657) Calling .PreCreateCheck
	I0722 12:11:12.993297   66110 main.go:141] libmachine: (newest-cni-355657) Calling .GetConfigRaw
	I0722 12:11:12.993664   66110 main.go:141] libmachine: Creating machine...
	I0722 12:11:12.993678   66110 main.go:141] libmachine: (newest-cni-355657) Calling .Create
	I0722 12:11:12.993813   66110 main.go:141] libmachine: (newest-cni-355657) Creating KVM machine...
	I0722 12:11:12.994965   66110 main.go:141] libmachine: (newest-cni-355657) DBG | found existing default KVM network
	I0722 12:11:12.996262   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:12.996084   66134 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cd:bb:47} reservation:<nil>}
	I0722 12:11:12.997424   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:12.997361   66134 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000288970}
	I0722 12:11:12.997479   66110 main.go:141] libmachine: (newest-cni-355657) DBG | created network xml: 
	I0722 12:11:12.997501   66110 main.go:141] libmachine: (newest-cni-355657) DBG | <network>
	I0722 12:11:12.997523   66110 main.go:141] libmachine: (newest-cni-355657) DBG |   <name>mk-newest-cni-355657</name>
	I0722 12:11:12.997540   66110 main.go:141] libmachine: (newest-cni-355657) DBG |   <dns enable='no'/>
	I0722 12:11:12.997550   66110 main.go:141] libmachine: (newest-cni-355657) DBG |   
	I0722 12:11:12.997562   66110 main.go:141] libmachine: (newest-cni-355657) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0722 12:11:12.997591   66110 main.go:141] libmachine: (newest-cni-355657) DBG |     <dhcp>
	I0722 12:11:12.997610   66110 main.go:141] libmachine: (newest-cni-355657) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0722 12:11:12.997636   66110 main.go:141] libmachine: (newest-cni-355657) DBG |     </dhcp>
	I0722 12:11:12.997657   66110 main.go:141] libmachine: (newest-cni-355657) DBG |   </ip>
	I0722 12:11:12.997670   66110 main.go:141] libmachine: (newest-cni-355657) DBG |   
	I0722 12:11:12.997680   66110 main.go:141] libmachine: (newest-cni-355657) DBG | </network>
	I0722 12:11:12.997691   66110 main.go:141] libmachine: (newest-cni-355657) DBG | 
	I0722 12:11:13.002479   66110 main.go:141] libmachine: (newest-cni-355657) DBG | trying to create private KVM network mk-newest-cni-355657 192.168.50.0/24...
	I0722 12:11:13.070303   66110 main.go:141] libmachine: (newest-cni-355657) DBG | private KVM network mk-newest-cni-355657 192.168.50.0/24 created
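	The DBG block a few lines up is the libvirt network definition minikube writes before creating the private mk-newest-cni-355657 network. A stand-alone sketch that renders the same shape of XML for the subnet chosen in this run; the template and helper are illustrative and are not minikube's implementation:

    package main

    import (
        "os"
        "text/template"
    )

    // networkTmpl follows the XML printed in the DBG output above.
    const networkTmpl = `<network>
      <name>mk-{{.Name}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='255.255.255.0'>
        <dhcp>
          <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
        </dhcp>
      </ip>
    </network>
    `

    type netParams struct {
        Name      string
        Gateway   string
        ClientMin string
        ClientMax string
    }

    func main() {
        t := template.Must(template.New("net").Parse(networkTmpl))
        // Values taken from the log: the free 192.168.50.0/24 subnet picked for newest-cni-355657.
        p := netParams{
            Name:      "newest-cni-355657",
            Gateway:   "192.168.50.1",
            ClientMin: "192.168.50.2",
            ClientMax: "192.168.50.253",
        }
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }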
	I0722 12:11:13.070351   66110 main.go:141] libmachine: (newest-cni-355657) Setting up store path in /home/jenkins/minikube-integration/19313-5960/.minikube/machines/newest-cni-355657 ...
	I0722 12:11:13.070374   66110 main.go:141] libmachine: (newest-cni-355657) Building disk image from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 12:11:13.070433   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:13.070359   66134 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 12:11:13.070535   66110 main.go:141] libmachine: (newest-cni-355657) Downloading /home/jenkins/minikube-integration/19313-5960/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 12:11:13.304259   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:13.304114   66134 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/newest-cni-355657/id_rsa...
	I0722 12:11:13.361968   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:13.361846   66134 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/newest-cni-355657/newest-cni-355657.rawdisk...
	I0722 12:11:13.362005   66110 main.go:141] libmachine: (newest-cni-355657) DBG | Writing magic tar header
	I0722 12:11:13.362023   66110 main.go:141] libmachine: (newest-cni-355657) DBG | Writing SSH key tar header
	I0722 12:11:13.362035   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:13.361969   66134 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/newest-cni-355657 ...
	I0722 12:11:13.362113   66110 main.go:141] libmachine: (newest-cni-355657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/newest-cni-355657
	I0722 12:11:13.362193   66110 main.go:141] libmachine: (newest-cni-355657) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines/newest-cni-355657 (perms=drwx------)
	I0722 12:11:13.362208   66110 main.go:141] libmachine: (newest-cni-355657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube/machines
	I0722 12:11:13.362219   66110 main.go:141] libmachine: (newest-cni-355657) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube/machines (perms=drwxr-xr-x)
	I0722 12:11:13.362236   66110 main.go:141] libmachine: (newest-cni-355657) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960/.minikube (perms=drwxr-xr-x)
	I0722 12:11:13.362249   66110 main.go:141] libmachine: (newest-cni-355657) Setting executable bit set on /home/jenkins/minikube-integration/19313-5960 (perms=drwxrwxr-x)
	I0722 12:11:13.362262   66110 main.go:141] libmachine: (newest-cni-355657) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0722 12:11:13.362274   66110 main.go:141] libmachine: (newest-cni-355657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 12:11:13.362294   66110 main.go:141] libmachine: (newest-cni-355657) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0722 12:11:13.362306   66110 main.go:141] libmachine: (newest-cni-355657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19313-5960
	I0722 12:11:13.362321   66110 main.go:141] libmachine: (newest-cni-355657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0722 12:11:13.362333   66110 main.go:141] libmachine: (newest-cni-355657) DBG | Checking permissions on dir: /home/jenkins
	I0722 12:11:13.362347   66110 main.go:141] libmachine: (newest-cni-355657) Creating domain...
	I0722 12:11:13.362377   66110 main.go:141] libmachine: (newest-cni-355657) DBG | Checking permissions on dir: /home
	I0722 12:11:13.362392   66110 main.go:141] libmachine: (newest-cni-355657) DBG | Skipping /home - not owner
	I0722 12:11:13.363451   66110 main.go:141] libmachine: (newest-cni-355657) define libvirt domain using xml: 
	I0722 12:11:13.363481   66110 main.go:141] libmachine: (newest-cni-355657) <domain type='kvm'>
	I0722 12:11:13.363495   66110 main.go:141] libmachine: (newest-cni-355657)   <name>newest-cni-355657</name>
	I0722 12:11:13.363504   66110 main.go:141] libmachine: (newest-cni-355657)   <memory unit='MiB'>2200</memory>
	I0722 12:11:13.363513   66110 main.go:141] libmachine: (newest-cni-355657)   <vcpu>2</vcpu>
	I0722 12:11:13.363524   66110 main.go:141] libmachine: (newest-cni-355657)   <features>
	I0722 12:11:13.363532   66110 main.go:141] libmachine: (newest-cni-355657)     <acpi/>
	I0722 12:11:13.363542   66110 main.go:141] libmachine: (newest-cni-355657)     <apic/>
	I0722 12:11:13.363557   66110 main.go:141] libmachine: (newest-cni-355657)     <pae/>
	I0722 12:11:13.363567   66110 main.go:141] libmachine: (newest-cni-355657)     
	I0722 12:11:13.363575   66110 main.go:141] libmachine: (newest-cni-355657)   </features>
	I0722 12:11:13.363585   66110 main.go:141] libmachine: (newest-cni-355657)   <cpu mode='host-passthrough'>
	I0722 12:11:13.363593   66110 main.go:141] libmachine: (newest-cni-355657)   
	I0722 12:11:13.363600   66110 main.go:141] libmachine: (newest-cni-355657)   </cpu>
	I0722 12:11:13.363605   66110 main.go:141] libmachine: (newest-cni-355657)   <os>
	I0722 12:11:13.363611   66110 main.go:141] libmachine: (newest-cni-355657)     <type>hvm</type>
	I0722 12:11:13.363622   66110 main.go:141] libmachine: (newest-cni-355657)     <boot dev='cdrom'/>
	I0722 12:11:13.363631   66110 main.go:141] libmachine: (newest-cni-355657)     <boot dev='hd'/>
	I0722 12:11:13.363641   66110 main.go:141] libmachine: (newest-cni-355657)     <bootmenu enable='no'/>
	I0722 12:11:13.363651   66110 main.go:141] libmachine: (newest-cni-355657)   </os>
	I0722 12:11:13.363660   66110 main.go:141] libmachine: (newest-cni-355657)   <devices>
	I0722 12:11:13.363671   66110 main.go:141] libmachine: (newest-cni-355657)     <disk type='file' device='cdrom'>
	I0722 12:11:13.363687   66110 main.go:141] libmachine: (newest-cni-355657)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/newest-cni-355657/boot2docker.iso'/>
	I0722 12:11:13.363698   66110 main.go:141] libmachine: (newest-cni-355657)       <target dev='hdc' bus='scsi'/>
	I0722 12:11:13.363707   66110 main.go:141] libmachine: (newest-cni-355657)       <readonly/>
	I0722 12:11:13.363714   66110 main.go:141] libmachine: (newest-cni-355657)     </disk>
	I0722 12:11:13.363725   66110 main.go:141] libmachine: (newest-cni-355657)     <disk type='file' device='disk'>
	I0722 12:11:13.363738   66110 main.go:141] libmachine: (newest-cni-355657)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0722 12:11:13.363755   66110 main.go:141] libmachine: (newest-cni-355657)       <source file='/home/jenkins/minikube-integration/19313-5960/.minikube/machines/newest-cni-355657/newest-cni-355657.rawdisk'/>
	I0722 12:11:13.363766   66110 main.go:141] libmachine: (newest-cni-355657)       <target dev='hda' bus='virtio'/>
	I0722 12:11:13.363778   66110 main.go:141] libmachine: (newest-cni-355657)     </disk>
	I0722 12:11:13.363789   66110 main.go:141] libmachine: (newest-cni-355657)     <interface type='network'>
	I0722 12:11:13.363801   66110 main.go:141] libmachine: (newest-cni-355657)       <source network='mk-newest-cni-355657'/>
	I0722 12:11:13.363812   66110 main.go:141] libmachine: (newest-cni-355657)       <model type='virtio'/>
	I0722 12:11:13.363822   66110 main.go:141] libmachine: (newest-cni-355657)     </interface>
	I0722 12:11:13.363832   66110 main.go:141] libmachine: (newest-cni-355657)     <interface type='network'>
	I0722 12:11:13.363841   66110 main.go:141] libmachine: (newest-cni-355657)       <source network='default'/>
	I0722 12:11:13.363862   66110 main.go:141] libmachine: (newest-cni-355657)       <model type='virtio'/>
	I0722 12:11:13.363892   66110 main.go:141] libmachine: (newest-cni-355657)     </interface>
	I0722 12:11:13.363912   66110 main.go:141] libmachine: (newest-cni-355657)     <serial type='pty'>
	I0722 12:11:13.363924   66110 main.go:141] libmachine: (newest-cni-355657)       <target port='0'/>
	I0722 12:11:13.363932   66110 main.go:141] libmachine: (newest-cni-355657)     </serial>
	I0722 12:11:13.363942   66110 main.go:141] libmachine: (newest-cni-355657)     <console type='pty'>
	I0722 12:11:13.363950   66110 main.go:141] libmachine: (newest-cni-355657)       <target type='serial' port='0'/>
	I0722 12:11:13.363962   66110 main.go:141] libmachine: (newest-cni-355657)     </console>
	I0722 12:11:13.363971   66110 main.go:141] libmachine: (newest-cni-355657)     <rng model='virtio'>
	I0722 12:11:13.363984   66110 main.go:141] libmachine: (newest-cni-355657)       <backend model='random'>/dev/random</backend>
	I0722 12:11:13.363992   66110 main.go:141] libmachine: (newest-cni-355657)     </rng>
	I0722 12:11:13.364000   66110 main.go:141] libmachine: (newest-cni-355657)     
	I0722 12:11:13.364009   66110 main.go:141] libmachine: (newest-cni-355657)     
	I0722 12:11:13.364018   66110 main.go:141] libmachine: (newest-cni-355657)   </devices>
	I0722 12:11:13.364028   66110 main.go:141] libmachine: (newest-cni-355657) </domain>
	I0722 12:11:13.364043   66110 main.go:141] libmachine: (newest-cni-355657) 
	I0722 12:11:13.368358   66110 main.go:141] libmachine: (newest-cni-355657) DBG | domain newest-cni-355657 has defined MAC address 52:54:00:5e:25:2d in network default
	I0722 12:11:13.369166   66110 main.go:141] libmachine: (newest-cni-355657) Ensuring networks are active...
	I0722 12:11:13.369190   66110 main.go:141] libmachine: (newest-cni-355657) DBG | domain newest-cni-355657 has defined MAC address 52:54:00:56:80:8e in network mk-newest-cni-355657
	I0722 12:11:13.369900   66110 main.go:141] libmachine: (newest-cni-355657) Ensuring network default is active
	I0722 12:11:13.370193   66110 main.go:141] libmachine: (newest-cni-355657) Ensuring network mk-newest-cni-355657 is active
	I0722 12:11:13.370689   66110 main.go:141] libmachine: (newest-cni-355657) Getting domain xml...
	I0722 12:11:13.371459   66110 main.go:141] libmachine: (newest-cni-355657) Creating domain...
	I0722 12:11:14.615291   66110 main.go:141] libmachine: (newest-cni-355657) Waiting to get IP...
	I0722 12:11:14.616221   66110 main.go:141] libmachine: (newest-cni-355657) DBG | domain newest-cni-355657 has defined MAC address 52:54:00:56:80:8e in network mk-newest-cni-355657
	I0722 12:11:14.616814   66110 main.go:141] libmachine: (newest-cni-355657) DBG | unable to find current IP address of domain newest-cni-355657 in network mk-newest-cni-355657
	I0722 12:11:14.616852   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:14.616664   66134 retry.go:31] will retry after 225.40681ms: waiting for machine to come up
	I0722 12:11:14.844050   66110 main.go:141] libmachine: (newest-cni-355657) DBG | domain newest-cni-355657 has defined MAC address 52:54:00:56:80:8e in network mk-newest-cni-355657
	I0722 12:11:14.844506   66110 main.go:141] libmachine: (newest-cni-355657) DBG | unable to find current IP address of domain newest-cni-355657 in network mk-newest-cni-355657
	I0722 12:11:14.844534   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:14.844466   66134 retry.go:31] will retry after 383.616919ms: waiting for machine to come up
	I0722 12:11:15.230111   66110 main.go:141] libmachine: (newest-cni-355657) DBG | domain newest-cni-355657 has defined MAC address 52:54:00:56:80:8e in network mk-newest-cni-355657
	I0722 12:11:15.230571   66110 main.go:141] libmachine: (newest-cni-355657) DBG | unable to find current IP address of domain newest-cni-355657 in network mk-newest-cni-355657
	I0722 12:11:15.230598   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:15.230528   66134 retry.go:31] will retry after 448.702088ms: waiting for machine to come up
	I0722 12:11:15.681173   66110 main.go:141] libmachine: (newest-cni-355657) DBG | domain newest-cni-355657 has defined MAC address 52:54:00:56:80:8e in network mk-newest-cni-355657
	I0722 12:11:15.681701   66110 main.go:141] libmachine: (newest-cni-355657) DBG | unable to find current IP address of domain newest-cni-355657 in network mk-newest-cni-355657
	I0722 12:11:15.681727   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:15.681659   66134 retry.go:31] will retry after 526.177558ms: waiting for machine to come up
	I0722 12:11:16.209340   66110 main.go:141] libmachine: (newest-cni-355657) DBG | domain newest-cni-355657 has defined MAC address 52:54:00:56:80:8e in network mk-newest-cni-355657
	I0722 12:11:16.209805   66110 main.go:141] libmachine: (newest-cni-355657) DBG | unable to find current IP address of domain newest-cni-355657 in network mk-newest-cni-355657
	I0722 12:11:16.209827   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:16.209764   66134 retry.go:31] will retry after 546.629864ms: waiting for machine to come up
	I0722 12:11:16.757655   66110 main.go:141] libmachine: (newest-cni-355657) DBG | domain newest-cni-355657 has defined MAC address 52:54:00:56:80:8e in network mk-newest-cni-355657
	I0722 12:11:16.758124   66110 main.go:141] libmachine: (newest-cni-355657) DBG | unable to find current IP address of domain newest-cni-355657 in network mk-newest-cni-355657
	I0722 12:11:16.758154   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:16.758082   66134 retry.go:31] will retry after 639.512009ms: waiting for machine to come up
	I0722 12:11:17.398850   66110 main.go:141] libmachine: (newest-cni-355657) DBG | domain newest-cni-355657 has defined MAC address 52:54:00:56:80:8e in network mk-newest-cni-355657
	I0722 12:11:17.399359   66110 main.go:141] libmachine: (newest-cni-355657) DBG | unable to find current IP address of domain newest-cni-355657 in network mk-newest-cni-355657
	I0722 12:11:17.399389   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:17.399302   66134 retry.go:31] will retry after 1.048845619s: waiting for machine to come up
	I0722 12:11:18.449940   66110 main.go:141] libmachine: (newest-cni-355657) DBG | domain newest-cni-355657 has defined MAC address 52:54:00:56:80:8e in network mk-newest-cni-355657
	I0722 12:11:18.450411   66110 main.go:141] libmachine: (newest-cni-355657) DBG | unable to find current IP address of domain newest-cni-355657 in network mk-newest-cni-355657
	I0722 12:11:18.450439   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:18.450341   66134 retry.go:31] will retry after 1.094781737s: waiting for machine to come up
	I0722 12:11:19.546971   66110 main.go:141] libmachine: (newest-cni-355657) DBG | domain newest-cni-355657 has defined MAC address 52:54:00:56:80:8e in network mk-newest-cni-355657
	I0722 12:11:19.547510   66110 main.go:141] libmachine: (newest-cni-355657) DBG | unable to find current IP address of domain newest-cni-355657 in network mk-newest-cni-355657
	I0722 12:11:19.547539   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:19.547481   66134 retry.go:31] will retry after 1.759143448s: waiting for machine to come up
	I0722 12:11:21.308019   66110 main.go:141] libmachine: (newest-cni-355657) DBG | domain newest-cni-355657 has defined MAC address 52:54:00:56:80:8e in network mk-newest-cni-355657
	I0722 12:11:21.308491   66110 main.go:141] libmachine: (newest-cni-355657) DBG | unable to find current IP address of domain newest-cni-355657 in network mk-newest-cni-355657
	I0722 12:11:21.308518   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:21.308449   66134 retry.go:31] will retry after 1.45367421s: waiting for machine to come up
	I0722 12:11:22.764058   66110 main.go:141] libmachine: (newest-cni-355657) DBG | domain newest-cni-355657 has defined MAC address 52:54:00:56:80:8e in network mk-newest-cni-355657
	I0722 12:11:22.764573   66110 main.go:141] libmachine: (newest-cni-355657) DBG | unable to find current IP address of domain newest-cni-355657 in network mk-newest-cni-355657
	I0722 12:11:22.764595   66110 main.go:141] libmachine: (newest-cni-355657) DBG | I0722 12:11:22.764536   66134 retry.go:31] will retry after 1.839534153s: waiting for machine to come up
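	The repeated "will retry after ...: waiting for machine to come up" lines come from a retry helper that polls for the domain's DHCP lease with growing, jittered delays until the VM reports an IP. A self-contained sketch of that pattern; lookupIP is a hypothetical stand-in for the real libvirt lease lookup keyed on the MAC address 52:54:00:56:80:8e seen above:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for the real check, which asks libvirt for the DHCP
    // lease belonging to the domain's MAC address.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func main() {
        delay := 200 * time.Millisecond
        for attempt := 1; attempt <= 15; attempt++ {
            ip, err := lookupIP()
            if err == nil {
                fmt.Println("machine is up at", ip)
                return
            }
            // Grow the wait and add jitter, roughly matching the retry.go lines above.
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("attempt %d: will retry after %s: waiting for machine to come up\n", attempt, wait)
            time.Sleep(wait)
            delay = delay * 3 / 2
        }
        fmt.Println("gave up waiting for machine to come up")
    }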
	
	
	==> CRI-O <==
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.055098555Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:659da6d7fe6765a1b7a5bb727b9ca8707d4d34bed1a43476cd6b2d595c30305c,Metadata:&PodSandboxMetadata{Name:metrics-server-78fcd8795b-9vzx2,Uid:bb2ae44c-3190-4025-8f2e-e236c52da27e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721649436807839373,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-78fcd8795b-9vzx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2ae44c-3190-4025-8f2e-e236c52da27e,k8s-app: metrics-server,pod-template-hash: 78fcd8795b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-22T11:57:16.474084475Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ae585ea000cb2ce0ea120a3ded77b1806634b3475c71f00436611c9daf327612,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-xxf6t,Uid:6e933cad-a95a-47c4-b8b9-89205619fb
70,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721649436525143613,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-xxf6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e933cad-a95a-47c4-b8b9-89205619fb70,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-22T11:57:16.207260310Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8d80b05b44b9721479e2e2c9005fabd42108cf30979547bf56fd71477f585975,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-vg4wp,Uid:3556f321-9c0a-437f-a06e-4eca4b07781d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721649436522300918,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-vg4wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3556f321-9c0a-437f-a06e-4eca4b07781d,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:ma
p[string]string{kubernetes.io/config.seen: 2024-07-22T11:57:16.198993745Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c390506aa48d8e46d671e5a76f5160d34bd5c9623758f3e396cd9a93dc2d2916,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f56d91d7-a252-485d-936d-3f44804d26ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721649436466156809,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f56d91d7-a252-485d-936d-3f44804d26ec,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[
{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-22T11:57:16.155938594Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:70d9710be7f5c7b3728cc04c57e3b33dc724fd5be3fcb9cda81c5c885a3dd6fc,Metadata:&PodSandboxMetadata{Name:kube-proxy-b5xwg,Uid:6ec19ad2-170e-4402-bcb7-ebf14a2537ce,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721649434875376822,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b5xwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec19ad2-170e-4402-bcb7-ebf14a2537ce,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-22T11:57:14.569134428Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:02e56cd7d11112913ccacc24721fb70943597a249b56c7e8e933af60a648dc09,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-339929,Uid:c4ff9d431109c2f52e7587ade669ddf2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721649424111959375,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff9d431109c2f52e7587ade669ddf2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.112:2379,kubernetes.io/config.hash: c4ff9d431109c2f52e7587ade669ddf2,kubernetes.io/config.seen: 2024-07-22T11:57:03.662372017Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c7d3ab04f4ed965cbb96a67d3bd8e173b10686962b2b39f7667ad02b70a312e9,Met
adata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-339929,Uid:7eef2f9dd45be154ce4a9790165b4dbe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721649424102955601,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef2f9dd45be154ce4a9790165b4dbe,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7eef2f9dd45be154ce4a9790165b4dbe,kubernetes.io/config.seen: 2024-07-22T11:57:03.662377217Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:60c7105c285b6d1923be5b5da90a37f81d9a39410ae2b930ec03a914ee64170e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-339929,Uid:7ea1153371c27970571c21f4e38f3274,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721649424101228870,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: kube-controller-manager-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea1153371c27970571c21f4e38f3274,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7ea1153371c27970571c21f4e38f3274,kubernetes.io/config.seen: 2024-07-22T11:57:03.662376363Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:06b0c7bfff953bff86ae346836288acaf08a9e921ef6d33f623301318c876570,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-339929,Uid:8d6a7fff67c30794e5777b214671c482,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721649424098522374,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.112:844
3,kubernetes.io/config.hash: 8d6a7fff67c30794e5777b214671c482,kubernetes.io/config.seen: 2024-07-22T11:57:03.662375088Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6b62e6b7f0e4823ef05a6a6914f78361d042e916188d35ae64bc247204561d60,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-339929,Uid:8d6a7fff67c30794e5777b214671c482,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721649136224879103,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.112:8443,kubernetes.io/config.hash: 8d6a7fff67c30794e5777b214671c482,kubernetes.io/config.seen: 2024-07-22T11:52:15.730729708Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/inter
ceptors.go:74" id=bfc59216-2a76-4e11-b24a-0fd0c61df45b name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.055722229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b234623-05b2-4172-b698-148804c1ccc7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.055785075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b234623-05b2-4172-b698-148804c1ccc7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.055964665Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1a39adf6b9e9378c9695949b0ed79aad89e6211b84d35cad4ab7c29f3da22ae,PodSandboxId:ae585ea000cb2ce0ea120a3ded77b1806634b3475c71f00436611c9daf327612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649437235440759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-xxf6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e933cad-a95a-47c4-b8b9-89205619fb70,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:376a436fd8b890963a66d8a99735988693b079dc2a956af2126f7869f0053e0f,PodSandboxId:8d80b05b44b9721479e2e2c9005fabd42108cf30979547bf56fd71477f585975,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649437043061939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vg4wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3556f321-9c0a-437f-a06e-4eca4b07781d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba8b852f9942c9818af636892aa747f89f3169141e374be66b6112779e5c757,PodSandboxId:c390506aa48d8e46d671e5a76f5160d34bd5c9623758f3e396cd9a93dc2d2916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721649436722894912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f56d91d7-a252-485d-936d-3f44804d26ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6a274fac983e3f03f7a8571a5c40733e0fa8c1af7ffa5124cac7eeedb178de,PodSandboxId:70d9710be7f5c7b3728cc04c57e3b33dc724fd5be3fcb9cda81c5c885a3dd6fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721649434968855963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b5xwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec19ad2-170e-4402-bcb7-ebf14a2537ce,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af66c67a58cf0c017b01f074f98cf9283faf228c0b838fc7d4aa110b04c08ffa,PodSandboxId:c7d3ab04f4ed965cbb96a67d3bd8e173b10686962b2b39f7667ad02b70a312e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721649424390244154,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef2f9dd45be154ce4a9790165b4dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8434d25d9dec778493e014a1223688e94d748f85f1fa62621775f2fcfe0d223,PodSandboxId:02e56cd7d11112913ccacc24721fb70943597a249b56c7e8e933af60a648dc09,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721649424377862481,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff9d431109c2f52e7587ade669ddf2,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e15950675152888a4a35729c4224dc87dbbc417db28cb8168f88c26f738b951,PodSandboxId:60c7105c285b6d1923be5b5da90a37f81d9a39410ae2b930ec03a914ee64170e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721649424310049408,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea1153371c27970571c21f4e38f3274,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a05a56db34efbd1c14d9d20cefc96297b98e7e0ef0ec4f9a85f9e4b5d28d34,PodSandboxId:06b0c7bfff953bff86ae346836288acaf08a9e921ef6d33f623301318c876570,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721649424258460009,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247e869804e354d29ecaf61281da8f07a64cc6d207d5b43ac5df5b2d3a916b98,PodSandboxId:6b62e6b7f0e4823ef05a6a6914f78361d042e916188d35ae64bc247204561d60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721649136433121431,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b234623-05b2-4172-b698-148804c1ccc7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.091756109Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c029a63-ed99-4bc5-ba4d-013d2a884b69 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.091852368Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c029a63-ed99-4bc5-ba4d-013d2a884b69 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.093309875Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2455e743-9307-4e53-aef4-1df243e393a6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.093681209Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650287093660655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2455e743-9307-4e53-aef4-1df243e393a6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.094313392Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=574a0ed4-afe1-472c-b83b-d31ecf499b6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.094380981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=574a0ed4-afe1-472c-b83b-d31ecf499b6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.094628482Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1a39adf6b9e9378c9695949b0ed79aad89e6211b84d35cad4ab7c29f3da22ae,PodSandboxId:ae585ea000cb2ce0ea120a3ded77b1806634b3475c71f00436611c9daf327612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649437235440759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-xxf6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e933cad-a95a-47c4-b8b9-89205619fb70,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:376a436fd8b890963a66d8a99735988693b079dc2a956af2126f7869f0053e0f,PodSandboxId:8d80b05b44b9721479e2e2c9005fabd42108cf30979547bf56fd71477f585975,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649437043061939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vg4wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3556f321-9c0a-437f-a06e-4eca4b07781d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba8b852f9942c9818af636892aa747f89f3169141e374be66b6112779e5c757,PodSandboxId:c390506aa48d8e46d671e5a76f5160d34bd5c9623758f3e396cd9a93dc2d2916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721649436722894912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f56d91d7-a252-485d-936d-3f44804d26ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6a274fac983e3f03f7a8571a5c40733e0fa8c1af7ffa5124cac7eeedb178de,PodSandboxId:70d9710be7f5c7b3728cc04c57e3b33dc724fd5be3fcb9cda81c5c885a3dd6fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721649434968855963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b5xwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec19ad2-170e-4402-bcb7-ebf14a2537ce,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af66c67a58cf0c017b01f074f98cf9283faf228c0b838fc7d4aa110b04c08ffa,PodSandboxId:c7d3ab04f4ed965cbb96a67d3bd8e173b10686962b2b39f7667ad02b70a312e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721649424390244154,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef2f9dd45be154ce4a9790165b4dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8434d25d9dec778493e014a1223688e94d748f85f1fa62621775f2fcfe0d223,PodSandboxId:02e56cd7d11112913ccacc24721fb70943597a249b56c7e8e933af60a648dc09,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721649424377862481,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff9d431109c2f52e7587ade669ddf2,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e15950675152888a4a35729c4224dc87dbbc417db28cb8168f88c26f738b951,PodSandboxId:60c7105c285b6d1923be5b5da90a37f81d9a39410ae2b930ec03a914ee64170e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721649424310049408,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea1153371c27970571c21f4e38f3274,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a05a56db34efbd1c14d9d20cefc96297b98e7e0ef0ec4f9a85f9e4b5d28d34,PodSandboxId:06b0c7bfff953bff86ae346836288acaf08a9e921ef6d33f623301318c876570,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721649424258460009,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247e869804e354d29ecaf61281da8f07a64cc6d207d5b43ac5df5b2d3a916b98,PodSandboxId:6b62e6b7f0e4823ef05a6a6914f78361d042e916188d35ae64bc247204561d60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721649136433121431,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=574a0ed4-afe1-472c-b83b-d31ecf499b6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.135432273Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d864595-a284-4acd-a983-78af47c3c36d name=/runtime.v1.RuntimeService/Version
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.135520245Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d864595-a284-4acd-a983-78af47c3c36d name=/runtime.v1.RuntimeService/Version
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.136436170Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d171c331-ad53-40bf-af4a-c5c8f8720ad4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.136850015Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650287136829130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d171c331-ad53-40bf-af4a-c5c8f8720ad4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.137277382Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97536767-8b97-4559-a2da-52b3e0fad010 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.137349545Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97536767-8b97-4559-a2da-52b3e0fad010 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.137537164Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1a39adf6b9e9378c9695949b0ed79aad89e6211b84d35cad4ab7c29f3da22ae,PodSandboxId:ae585ea000cb2ce0ea120a3ded77b1806634b3475c71f00436611c9daf327612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649437235440759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-xxf6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e933cad-a95a-47c4-b8b9-89205619fb70,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:376a436fd8b890963a66d8a99735988693b079dc2a956af2126f7869f0053e0f,PodSandboxId:8d80b05b44b9721479e2e2c9005fabd42108cf30979547bf56fd71477f585975,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649437043061939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vg4wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3556f321-9c0a-437f-a06e-4eca4b07781d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba8b852f9942c9818af636892aa747f89f3169141e374be66b6112779e5c757,PodSandboxId:c390506aa48d8e46d671e5a76f5160d34bd5c9623758f3e396cd9a93dc2d2916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721649436722894912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f56d91d7-a252-485d-936d-3f44804d26ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6a274fac983e3f03f7a8571a5c40733e0fa8c1af7ffa5124cac7eeedb178de,PodSandboxId:70d9710be7f5c7b3728cc04c57e3b33dc724fd5be3fcb9cda81c5c885a3dd6fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721649434968855963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b5xwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec19ad2-170e-4402-bcb7-ebf14a2537ce,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af66c67a58cf0c017b01f074f98cf9283faf228c0b838fc7d4aa110b04c08ffa,PodSandboxId:c7d3ab04f4ed965cbb96a67d3bd8e173b10686962b2b39f7667ad02b70a312e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721649424390244154,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef2f9dd45be154ce4a9790165b4dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8434d25d9dec778493e014a1223688e94d748f85f1fa62621775f2fcfe0d223,PodSandboxId:02e56cd7d11112913ccacc24721fb70943597a249b56c7e8e933af60a648dc09,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721649424377862481,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff9d431109c2f52e7587ade669ddf2,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e15950675152888a4a35729c4224dc87dbbc417db28cb8168f88c26f738b951,PodSandboxId:60c7105c285b6d1923be5b5da90a37f81d9a39410ae2b930ec03a914ee64170e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721649424310049408,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea1153371c27970571c21f4e38f3274,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a05a56db34efbd1c14d9d20cefc96297b98e7e0ef0ec4f9a85f9e4b5d28d34,PodSandboxId:06b0c7bfff953bff86ae346836288acaf08a9e921ef6d33f623301318c876570,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721649424258460009,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247e869804e354d29ecaf61281da8f07a64cc6d207d5b43ac5df5b2d3a916b98,PodSandboxId:6b62e6b7f0e4823ef05a6a6914f78361d042e916188d35ae64bc247204561d60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721649136433121431,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97536767-8b97-4559-a2da-52b3e0fad010 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.176756041Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=896b38f3-a422-48be-9f52-a57fc3275ce8 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.176854240Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=896b38f3-a422-48be-9f52-a57fc3275ce8 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.178916167Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c87063f4-1f6c-436d-bbde-d4179f08597c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.179447581Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650287179329053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c87063f4-1f6c-436d-bbde-d4179f08597c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.181627897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7fa4f02-90b6-483b-8f83-4f8d2a00a234 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.181725743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7fa4f02-90b6-483b-8f83-4f8d2a00a234 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:27 no-preload-339929 crio[740]: time="2024-07-22 12:11:27.181996678Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1a39adf6b9e9378c9695949b0ed79aad89e6211b84d35cad4ab7c29f3da22ae,PodSandboxId:ae585ea000cb2ce0ea120a3ded77b1806634b3475c71f00436611c9daf327612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649437235440759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-xxf6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e933cad-a95a-47c4-b8b9-89205619fb70,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:376a436fd8b890963a66d8a99735988693b079dc2a956af2126f7869f0053e0f,PodSandboxId:8d80b05b44b9721479e2e2c9005fabd42108cf30979547bf56fd71477f585975,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721649437043061939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-vg4wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3556f321-9c0a-437f-a06e-4eca4b07781d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba8b852f9942c9818af636892aa747f89f3169141e374be66b6112779e5c757,PodSandboxId:c390506aa48d8e46d671e5a76f5160d34bd5c9623758f3e396cd9a93dc2d2916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721649436722894912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f56d91d7-a252-485d-936d-3f44804d26ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6a274fac983e3f03f7a8571a5c40733e0fa8c1af7ffa5124cac7eeedb178de,PodSandboxId:70d9710be7f5c7b3728cc04c57e3b33dc724fd5be3fcb9cda81c5c885a3dd6fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721649434968855963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b5xwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec19ad2-170e-4402-bcb7-ebf14a2537ce,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af66c67a58cf0c017b01f074f98cf9283faf228c0b838fc7d4aa110b04c08ffa,PodSandboxId:c7d3ab04f4ed965cbb96a67d3bd8e173b10686962b2b39f7667ad02b70a312e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721649424390244154,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef2f9dd45be154ce4a9790165b4dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8434d25d9dec778493e014a1223688e94d748f85f1fa62621775f2fcfe0d223,PodSandboxId:02e56cd7d11112913ccacc24721fb70943597a249b56c7e8e933af60a648dc09,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721649424377862481,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4ff9d431109c2f52e7587ade669ddf2,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e15950675152888a4a35729c4224dc87dbbc417db28cb8168f88c26f738b951,PodSandboxId:60c7105c285b6d1923be5b5da90a37f81d9a39410ae2b930ec03a914ee64170e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721649424310049408,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea1153371c27970571c21f4e38f3274,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a05a56db34efbd1c14d9d20cefc96297b98e7e0ef0ec4f9a85f9e4b5d28d34,PodSandboxId:06b0c7bfff953bff86ae346836288acaf08a9e921ef6d33f623301318c876570,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721649424258460009,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247e869804e354d29ecaf61281da8f07a64cc6d207d5b43ac5df5b2d3a916b98,PodSandboxId:6b62e6b7f0e4823ef05a6a6914f78361d042e916188d35ae64bc247204561d60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721649136433121431,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-339929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a7fff67c30794e5777b214671c482,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7fa4f02-90b6-483b-8f83-4f8d2a00a234 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a1a39adf6b9e9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   ae585ea000cb2       coredns-5cfdc65f69-xxf6t
	376a436fd8b89       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   8d80b05b44b97       coredns-5cfdc65f69-vg4wp
	dba8b852f9942       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   c390506aa48d8       storage-provisioner
	ad6a274fac983       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   14 minutes ago      Running             kube-proxy                0                   70d9710be7f5c       kube-proxy-b5xwg
	af66c67a58cf0       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   14 minutes ago      Running             kube-scheduler            2                   c7d3ab04f4ed9       kube-scheduler-no-preload-339929
	b8434d25d9dec       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   14 minutes ago      Running             etcd                      2                   02e56cd7d1111       etcd-no-preload-339929
	8e15950675152       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   14 minutes ago      Running             kube-controller-manager   2                   60c7105c285b6       kube-controller-manager-no-preload-339929
	84a05a56db34e       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Running             kube-apiserver            2                   06b0c7bfff953       kube-apiserver-no-preload-339929
	247e869804e35       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   19 minutes ago      Exited              kube-apiserver            1                   6b62e6b7f0e48       kube-apiserver-no-preload-339929
	
	
	==> coredns [376a436fd8b890963a66d8a99735988693b079dc2a956af2126f7869f0053e0f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [a1a39adf6b9e9378c9695949b0ed79aad89e6211b84d35cad4ab7c29f3da22ae] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-339929
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-339929
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=no-preload-339929
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T11_57_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 11:57:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-339929
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 12:11:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 12:07:33 +0000   Mon, 22 Jul 2024 11:57:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 12:07:33 +0000   Mon, 22 Jul 2024 11:57:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 12:07:33 +0000   Mon, 22 Jul 2024 11:57:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 12:07:33 +0000   Mon, 22 Jul 2024 11:57:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.112
	  Hostname:    no-preload-339929
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4276fd8212f54a07afb517aee0ecb30d
	  System UUID:                4276fd82-12f5-4a07-afb5-17aee0ecb30d
	  Boot ID:                    dc98608e-1eaf-4e96-a621-04b1c3b629ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-vg4wp                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-5cfdc65f69-xxf6t                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-339929                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-339929             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-339929    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-b5xwg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-339929             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-78fcd8795b-9vzx2              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-339929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-339929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-339929 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-339929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-339929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-339929 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-339929 event: Registered Node no-preload-339929 in Controller
	
	
	==> dmesg <==
	[  +0.052332] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041508] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.807696] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.471733] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.634492] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.837731] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.071038] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066485] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.175944] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.137635] systemd-fstab-generator[696]: Ignoring "noauto" option for root device
	[  +0.313918] systemd-fstab-generator[726]: Ignoring "noauto" option for root device
	[Jul22 11:52] systemd-fstab-generator[1190]: Ignoring "noauto" option for root device
	[  +0.061140] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.669404] systemd-fstab-generator[1312]: Ignoring "noauto" option for root device
	[  +4.624692] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.802160] kauditd_printk_skb: 90 callbacks suppressed
	[Jul22 11:57] systemd-fstab-generator[2960]: Ignoring "noauto" option for root device
	[  +0.064034] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.174751] kauditd_printk_skb: 52 callbacks suppressed
	[  +1.815252] systemd-fstab-generator[3284]: Ignoring "noauto" option for root device
	[  +5.405201] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.509533] systemd-fstab-generator[3557]: Ignoring "noauto" option for root device
	[  +4.632777] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [b8434d25d9dec778493e014a1223688e94d748f85f1fa62621775f2fcfe0d223] <==
	{"level":"info","ts":"2024-07-22T11:57:04.749645Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.61.112:2380"}
	{"level":"info","ts":"2024-07-22T11:57:04.749669Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.61.112:2380"}
	{"level":"info","ts":"2024-07-22T11:57:05.413607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d72958ff42397886 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-22T11:57:05.413669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d72958ff42397886 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-22T11:57:05.413695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d72958ff42397886 received MsgPreVoteResp from d72958ff42397886 at term 1"}
	{"level":"info","ts":"2024-07-22T11:57:05.41371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d72958ff42397886 became candidate at term 2"}
	{"level":"info","ts":"2024-07-22T11:57:05.413715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d72958ff42397886 received MsgVoteResp from d72958ff42397886 at term 2"}
	{"level":"info","ts":"2024-07-22T11:57:05.413725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d72958ff42397886 became leader at term 2"}
	{"level":"info","ts":"2024-07-22T11:57:05.413732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d72958ff42397886 elected leader d72958ff42397886 at term 2"}
	{"level":"info","ts":"2024-07-22T11:57:05.417747Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d72958ff42397886","local-member-attributes":"{Name:no-preload-339929 ClientURLs:[https://192.168.61.112:2379]}","request-path":"/0/members/d72958ff42397886/attributes","cluster-id":"f21c4f9090188b3d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T11:57:05.417917Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:57:05.418013Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:57:05.418426Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:57:05.421647Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-22T11:57:05.425364Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.112:2379"}
	{"level":"info","ts":"2024-07-22T11:57:05.425515Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T11:57:05.425631Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T11:57:05.426149Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-22T11:57:05.429182Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f21c4f9090188b3d","local-member-id":"d72958ff42397886","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:57:05.429386Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:57:05.429428Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:57:05.433848Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T12:07:05.492715Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":724}
	{"level":"info","ts":"2024-07-22T12:07:05.502415Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":724,"took":"9.284897ms","hash":2866141277,"current-db-size-bytes":2260992,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2260992,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-22T12:07:05.502474Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2866141277,"revision":724,"compact-revision":-1}
	
	
	==> kernel <==
	 12:11:27 up 19 min,  0 users,  load average: 0.08, 0.05, 0.06
	Linux no-preload-339929 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [247e869804e354d29ecaf61281da8f07a64cc6d207d5b43ac5df5b2d3a916b98] <==
	W0722 11:56:56.420769       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.420806       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.455761       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.489505       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.492189       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.507820       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.602964       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.613506       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.649080       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.684276       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.731347       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.763728       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.786248       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.906648       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.909246       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:56.910613       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:57.045916       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:57.217026       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:57.482248       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:57.504011       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:56:57.649835       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:57:01.094965       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:57:01.239805       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:57:01.244276       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 11:57:01.250178       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [84a05a56db34efbd1c14d9d20cefc96297b98e7e0ef0ec4f9a85f9e4b5d28d34] <==
	W0722 12:07:08.174400       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 12:07:08.174498       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0722 12:07:08.175447       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0722 12:07:08.176616       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:08:08.175922       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 12:08:08.175998       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0722 12:08:08.177213       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 12:08:08.177286       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0722 12:08:08.177327       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0722 12:08:08.178446       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 12:10:08.178295       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 12:10:08.178411       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0722 12:10:08.179639       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 12:10:08.179755       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0722 12:10:08.179826       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0722 12:10:08.180913       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [8e15950675152888a4a35729c4224dc87dbbc417db28cb8168f88c26f738b951] <==
	E0722 12:06:15.309468       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:06:15.345360       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:06:45.317740       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:06:45.353423       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:07:15.324532       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:07:15.361689       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 12:07:33.222440       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-339929"
	E0722 12:07:45.330916       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:07:45.369277       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:08:15.339439       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:08:15.378842       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 12:08:26.889755       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="264.75µs"
	I0722 12:08:38.912156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="67.618µs"
	E0722 12:08:45.346061       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:08:45.387442       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:09:15.353193       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:09:15.397199       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:09:45.359224       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:09:45.405225       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:10:15.366377       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:10:15.412429       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:10:45.373739       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:10:45.420905       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 12:11:15.382173       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 12:11:15.429082       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ad6a274fac983e3f03f7a8571a5c40733e0fa8c1af7ffa5124cac7eeedb178de] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0722 11:57:15.155860       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0722 11:57:15.167115       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.112"]
	E0722 11:57:15.167189       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0722 11:57:15.201695       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0722 11:57:15.201742       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 11:57:15.201775       1 server_linux.go:170] "Using iptables Proxier"
	I0722 11:57:15.204199       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0722 11:57:15.204456       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0722 11:57:15.204482       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 11:57:15.205873       1 config.go:197] "Starting service config controller"
	I0722 11:57:15.205904       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 11:57:15.205925       1 config.go:104] "Starting endpoint slice config controller"
	I0722 11:57:15.205929       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 11:57:15.206491       1 config.go:326] "Starting node config controller"
	I0722 11:57:15.206628       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 11:57:15.307643       1 shared_informer.go:320] Caches are synced for node config
	I0722 11:57:15.307673       1 shared_informer.go:320] Caches are synced for service config
	I0722 11:57:15.307693       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [af66c67a58cf0c017b01f074f98cf9283faf228c0b838fc7d4aa110b04c08ffa] <==
	W0722 11:57:07.242179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 11:57:07.242206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:07.242221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 11:57:07.242228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:07.242256       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 11:57:07.242264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:07.252162       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 11:57:07.252217       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0722 11:57:08.082868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 11:57:08.082989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:08.117756       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 11:57:08.117859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:08.252428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 11:57:08.252522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:08.263365       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 11:57:08.263455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:08.378150       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0722 11:57:08.378249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:08.389071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 11:57:08.389176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:08.403983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 11:57:08.404090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 11:57:08.445229       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 11:57:08.445452       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0722 11:57:11.381119       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 12:09:09 no-preload-339929 kubelet[3291]: E0722 12:09:09.901371    3291 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 12:09:09 no-preload-339929 kubelet[3291]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 12:09:09 no-preload-339929 kubelet[3291]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 12:09:09 no-preload-339929 kubelet[3291]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:09:09 no-preload-339929 kubelet[3291]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:09:22 no-preload-339929 kubelet[3291]: E0722 12:09:22.869457    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:09:37 no-preload-339929 kubelet[3291]: E0722 12:09:37.870886    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:09:51 no-preload-339929 kubelet[3291]: E0722 12:09:51.871079    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:10:03 no-preload-339929 kubelet[3291]: E0722 12:10:03.869799    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:10:09 no-preload-339929 kubelet[3291]: E0722 12:10:09.898858    3291 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 12:10:09 no-preload-339929 kubelet[3291]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 12:10:09 no-preload-339929 kubelet[3291]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 12:10:09 no-preload-339929 kubelet[3291]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:10:09 no-preload-339929 kubelet[3291]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:10:17 no-preload-339929 kubelet[3291]: E0722 12:10:17.870311    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:10:31 no-preload-339929 kubelet[3291]: E0722 12:10:31.871719    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:10:44 no-preload-339929 kubelet[3291]: E0722 12:10:44.869260    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:10:55 no-preload-339929 kubelet[3291]: E0722 12:10:55.872004    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:11:08 no-preload-339929 kubelet[3291]: E0722 12:11:08.870464    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	Jul 22 12:11:09 no-preload-339929 kubelet[3291]: E0722 12:11:09.899950    3291 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 12:11:09 no-preload-339929 kubelet[3291]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 12:11:09 no-preload-339929 kubelet[3291]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 12:11:09 no-preload-339929 kubelet[3291]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 12:11:09 no-preload-339929 kubelet[3291]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 12:11:22 no-preload-339929 kubelet[3291]: E0722 12:11:22.869894    3291 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-9vzx2" podUID="bb2ae44c-3190-4025-8f2e-e236c52da27e"
	
	
	==> storage-provisioner [dba8b852f9942c9818af636892aa747f89f3169141e374be66b6112779e5c757] <==
	I0722 11:57:16.955678       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 11:57:16.997648       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 11:57:16.997903       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 11:57:17.018926       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 11:57:17.019718       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"86bd175a-a12f-46c6-806b-7eb3378e0317", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-339929_9b4e3ba2-b157-4f1c-a8c3-255cbfe7abd5 became leader
	I0722 11:57:17.019777       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-339929_9b4e3ba2-b157-4f1c-a8c3-255cbfe7abd5!
	I0722 11:57:17.122753       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-339929_9b4e3ba2-b157-4f1c-a8c3-255cbfe7abd5!
	

                                                
                                                
-- /stdout --
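The logs collected above consistently show the metrics-server pod failing to start because its image, fake.domain/registry.k8s.io/echoserver:1.4, cannot be pulled (the kubelet ImagePullBackOff entries), which in turn leaves the aggregated v1beta1.metrics.k8s.io APIService unavailable and produces the 503 and stale-GroupVersion errors in the kube-apiserver and kube-controller-manager sections. When triaging a run like this, one hypothetical follow-up, not part of the recorded run, would be to check the APIService directly:

	kubectl --context no-preload-339929 get apiservice v1beta1.metrics.k8s.io -o wide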
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-339929 -n no-preload-339929
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-339929 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-9vzx2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-339929 describe pod metrics-server-78fcd8795b-9vzx2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-339929 describe pod metrics-server-78fcd8795b-9vzx2: exit status 1 (60.387894ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-9vzx2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-339929 describe pod metrics-server-78fcd8795b-9vzx2: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (298.79s)
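The post-mortem describe above returns NotFound most likely because the pod name is queried without a namespace, while metrics-server-78fcd8795b-9vzx2 lives in kube-system (see the node allocation table and the non-running pod list earlier in this section). A namespaced variant, shown only as a hypothetical example and not executed in this run, would be:

	kubectl --context no-preload-339929 -n kube-system describe pod metrics-server-78fcd8795b-9vzx2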

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (163s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
E0722 12:08:29.087657   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
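The WARNING above is the test helper repeatedly retrying the same pod-list request against the apiserver at 192.168.50.51:8443, which keeps refusing connections. For reference only (an illustrative manual equivalent using the context name and label selector shown in the log, not part of the test output), the same query could be issued with kubectl:

	kubectl --context old-k8s-version-101261 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

While the apiserver is down, this command fails with the same "connection refused" error, which is why the poll never succeeds within the 9m0s deadline reported below.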
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-101261 -n old-k8s-version-101261
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-101261 -n old-k8s-version-101261: exit status 2 (217.991116ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-101261" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-101261 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-101261 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.03µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-101261 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
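For context, this image check expects the dashboard-metrics-scraper deployment to reference registry.k8s.io/echoserver:1.4, the image substituted earlier with "addons enable dashboard --images=MetricsScraper=registry.k8s.io/echoserver:1.4" (visible in the Audit table below). An illustrative manual check, usable only once the apiserver is reachable again and not part of the test output, would be:

	kubectl --context old-k8s-version-101261 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

Here the check produced no deployment info because the describe command above already failed with "context deadline exceeded".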
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-101261 -n old-k8s-version-101261
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-101261 -n old-k8s-version-101261: exit status 2 (215.75539ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-101261 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-101261 logs -n 25: (1.529430491s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC | 22 Jul 24 11:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:42 UTC | 22 Jul 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-467176                              | cert-expiration-467176       | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-339929             | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-339929                                   | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-467176                              | cert-expiration-467176       | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:43 UTC |
	| start   | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:43 UTC | 22 Jul 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-802149            | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-651148                           | kubernetes-upgrade-651148    | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	| delete  | -p                                                     | disable-driver-mounts-737017 | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:44 UTC |
	|         | disable-driver-mounts-737017                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:44 UTC | 22 Jul 24 11:46 UTC |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-101261        | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:45 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-339929                  | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-339929 --memory=2200                     | no-preload-339929            | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC | 22 Jul 24 11:57 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-605740  | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC | 22 Jul 24 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:46 UTC |                     |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-802149                 | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-802149                                  | embed-certs-802149           | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-101261                              | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:47 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-101261             | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC | 22 Jul 24 11:47 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-101261                              | old-k8s-version-101261       | jenkins | v1.33.1 | 22 Jul 24 11:47 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-605740       | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-605740 | jenkins | v1.33.1 | 22 Jul 24 11:49 UTC | 22 Jul 24 11:57 UTC |
	|         | default-k8s-diff-port-605740                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 11:49:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 11:49:15.771364   60225 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:49:15.771757   60225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:49:15.771777   60225 out.go:304] Setting ErrFile to fd 2...
	I0722 11:49:15.771784   60225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:49:15.772270   60225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:49:15.773178   60225 out.go:298] Setting JSON to false
	I0722 11:49:15.774093   60225 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5508,"bootTime":1721643448,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 11:49:15.774158   60225 start.go:139] virtualization: kvm guest
	I0722 11:49:15.776078   60225 out.go:177] * [default-k8s-diff-port-605740] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 11:49:15.777632   60225 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 11:49:15.777656   60225 notify.go:220] Checking for updates...
	I0722 11:49:15.780016   60225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 11:49:15.781179   60225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:49:15.782401   60225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:49:15.783538   60225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 11:49:15.784660   60225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 11:49:15.786153   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:49:15.786546   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:49:15.786580   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:49:15.801130   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40897
	I0722 11:49:15.801454   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:49:15.802000   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:49:15.802022   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:49:15.802343   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:49:15.802519   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:49:15.802785   60225 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 11:49:15.803097   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:49:15.803130   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:49:15.817222   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I0722 11:49:15.817616   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:49:15.818025   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:49:15.818050   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:49:15.818316   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:49:15.818457   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:49:15.851885   60225 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 11:49:15.853142   60225 start.go:297] selected driver: kvm2
	I0722 11:49:15.853162   60225 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:49:15.853293   60225 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 11:49:15.854178   60225 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:49:15.854267   60225 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 11:49:15.869086   60225 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 11:49:15.869437   60225 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:49:15.869496   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:49:15.869510   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:49:15.869553   60225 start.go:340] cluster config:
	{Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:49:15.869650   60225 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 11:49:15.871443   60225 out.go:177] * Starting "default-k8s-diff-port-605740" primary control-plane node in "default-k8s-diff-port-605740" cluster
	I0722 11:49:18.708660   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:15.872666   60225 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:49:15.872712   60225 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 11:49:15.872722   60225 cache.go:56] Caching tarball of preloaded images
	I0722 11:49:15.872822   60225 preload.go:172] Found /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 11:49:15.872836   60225 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 11:49:15.872964   60225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/config.json ...
	I0722 11:49:15.873188   60225 start.go:360] acquireMachinesLock for default-k8s-diff-port-605740: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:49:21.780635   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:27.860643   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:30.932670   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:37.012663   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:40.084620   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:46.164558   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:49.236597   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:55.316683   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:49:58.388708   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:04.468652   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:07.540692   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:13.620745   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:16.692661   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:22.772655   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:25.844570   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:31.924648   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:34.996632   58921 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.112:22: connect: no route to host
	I0722 11:50:38.000554   59477 start.go:364] duration metric: took 3m13.232713685s to acquireMachinesLock for "embed-certs-802149"
	I0722 11:50:38.000603   59477 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:50:38.000609   59477 fix.go:54] fixHost starting: 
	I0722 11:50:38.000916   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:50:38.000945   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:50:38.015673   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0722 11:50:38.016063   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:50:38.016570   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:50:38.016599   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:50:38.016926   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:50:38.017123   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:38.017256   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:50:38.018766   59477 fix.go:112] recreateIfNeeded on embed-certs-802149: state=Stopped err=<nil>
	I0722 11:50:38.018787   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	W0722 11:50:38.018925   59477 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:50:38.020306   59477 out.go:177] * Restarting existing kvm2 VM for "embed-certs-802149" ...
	I0722 11:50:38.021405   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Start
	I0722 11:50:38.021569   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring networks are active...
	I0722 11:50:38.022209   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring network default is active
	I0722 11:50:38.022492   59477 main.go:141] libmachine: (embed-certs-802149) Ensuring network mk-embed-certs-802149 is active
	I0722 11:50:38.022753   59477 main.go:141] libmachine: (embed-certs-802149) Getting domain xml...
	I0722 11:50:38.023364   59477 main.go:141] libmachine: (embed-certs-802149) Creating domain...
	I0722 11:50:39.205696   59477 main.go:141] libmachine: (embed-certs-802149) Waiting to get IP...
	I0722 11:50:39.206555   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.206928   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.207002   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.206893   60553 retry.go:31] will retry after 250.927989ms: waiting for machine to come up
	I0722 11:50:39.459432   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.459909   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.459938   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.459862   60553 retry.go:31] will retry after 277.950273ms: waiting for machine to come up
	I0722 11:50:37.998282   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:50:37.998320   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:50:37.998616   58921 buildroot.go:166] provisioning hostname "no-preload-339929"
	I0722 11:50:37.998638   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:50:37.998852   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:50:38.000410   58921 machine.go:97] duration metric: took 4m37.434000152s to provisionDockerMachine
	I0722 11:50:38.000456   58921 fix.go:56] duration metric: took 4m37.453731858s for fixHost
	I0722 11:50:38.000466   58921 start.go:83] releasing machines lock for "no-preload-339929", held for 4m37.453770575s
	W0722 11:50:38.000487   58921 start.go:714] error starting host: provision: host is not running
	W0722 11:50:38.000589   58921 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0722 11:50:38.000597   58921 start.go:729] Will try again in 5 seconds ...
	I0722 11:50:39.739339   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:39.739770   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:39.739799   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:39.739724   60553 retry.go:31] will retry after 367.4788ms: waiting for machine to come up
	I0722 11:50:40.109153   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:40.109568   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:40.109598   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:40.109518   60553 retry.go:31] will retry after 599.052603ms: waiting for machine to come up
	I0722 11:50:40.709866   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:40.710342   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:40.710375   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:40.710299   60553 retry.go:31] will retry after 469.478286ms: waiting for machine to come up
	I0722 11:50:41.180930   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:41.181348   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:41.181370   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:41.181302   60553 retry.go:31] will retry after 690.713081ms: waiting for machine to come up
	I0722 11:50:41.873801   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:41.874158   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:41.874182   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:41.874106   60553 retry.go:31] will retry after 828.336067ms: waiting for machine to come up
	I0722 11:50:42.703984   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:42.704401   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:42.704422   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:42.704340   60553 retry.go:31] will retry after 1.22368693s: waiting for machine to come up
	I0722 11:50:43.929406   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:43.929866   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:43.929896   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:43.929838   60553 retry.go:31] will retry after 1.809806439s: waiting for machine to come up
	I0722 11:50:43.002990   58921 start.go:360] acquireMachinesLock for no-preload-339929: {Name:mkb47562c13010c82c8cc12e8c5b700833b1d9dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 11:50:45.741657   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:45.742012   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:45.742034   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:45.741979   60553 retry.go:31] will retry after 2.216041266s: waiting for machine to come up
	I0722 11:50:47.959511   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:47.959979   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:47.960003   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:47.959919   60553 retry.go:31] will retry after 2.278973432s: waiting for machine to come up
	I0722 11:50:50.241992   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:50.242399   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:50.242413   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:50.242377   60553 retry.go:31] will retry after 2.533863574s: waiting for machine to come up
	I0722 11:50:52.779222   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:52.779627   59477 main.go:141] libmachine: (embed-certs-802149) DBG | unable to find current IP address of domain embed-certs-802149 in network mk-embed-certs-802149
	I0722 11:50:52.779661   59477 main.go:141] libmachine: (embed-certs-802149) DBG | I0722 11:50:52.779579   60553 retry.go:31] will retry after 3.004874532s: waiting for machine to come up
	I0722 11:50:57.057071   59674 start.go:364] duration metric: took 3m21.54200658s to acquireMachinesLock for "old-k8s-version-101261"
	I0722 11:50:57.057128   59674 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:50:57.057138   59674 fix.go:54] fixHost starting: 
	I0722 11:50:57.057543   59674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:50:57.057575   59674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:50:57.073788   59674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36245
	I0722 11:50:57.074103   59674 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:50:57.074561   59674 main.go:141] libmachine: Using API Version  1
	I0722 11:50:57.074582   59674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:50:57.074903   59674 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:50:57.075091   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:50:57.075225   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetState
	I0722 11:50:57.076587   59674 fix.go:112] recreateIfNeeded on old-k8s-version-101261: state=Stopped err=<nil>
	I0722 11:50:57.076607   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	W0722 11:50:57.076745   59674 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:50:57.079659   59674 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-101261" ...
	I0722 11:50:55.787998   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.788533   59477 main.go:141] libmachine: (embed-certs-802149) Found IP for machine: 192.168.72.113
	I0722 11:50:55.788556   59477 main.go:141] libmachine: (embed-certs-802149) Reserving static IP address...
	I0722 11:50:55.788567   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has current primary IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.788933   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "embed-certs-802149", mac: "52:54:00:ce:af:8a", ip: "192.168.72.113"} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.788954   59477 main.go:141] libmachine: (embed-certs-802149) DBG | skip adding static IP to network mk-embed-certs-802149 - found existing host DHCP lease matching {name: "embed-certs-802149", mac: "52:54:00:ce:af:8a", ip: "192.168.72.113"}
	I0722 11:50:55.788965   59477 main.go:141] libmachine: (embed-certs-802149) Reserved static IP address: 192.168.72.113
	I0722 11:50:55.788974   59477 main.go:141] libmachine: (embed-certs-802149) Waiting for SSH to be available...
	I0722 11:50:55.788984   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Getting to WaitForSSH function...
	I0722 11:50:55.791252   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.791573   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.791597   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.791699   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Using SSH client type: external
	I0722 11:50:55.791735   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa (-rw-------)
	I0722 11:50:55.791758   59477 main.go:141] libmachine: (embed-certs-802149) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:50:55.791768   59477 main.go:141] libmachine: (embed-certs-802149) DBG | About to run SSH command:
	I0722 11:50:55.791776   59477 main.go:141] libmachine: (embed-certs-802149) DBG | exit 0
	I0722 11:50:55.916215   59477 main.go:141] libmachine: (embed-certs-802149) DBG | SSH cmd err, output: <nil>: 
	I0722 11:50:55.916575   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetConfigRaw
	I0722 11:50:55.917177   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:55.919429   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.919723   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.919755   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.920020   59477 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/config.json ...
	I0722 11:50:55.920227   59477 machine.go:94] provisionDockerMachine start ...
	I0722 11:50:55.920249   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:55.920461   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:55.922469   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.922731   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:55.922756   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:55.922887   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:55.923063   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:55.923205   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:55.923340   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:55.923492   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:55.923698   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:55.923712   59477 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:50:56.032434   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:50:56.032465   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.032684   59477 buildroot.go:166] provisioning hostname "embed-certs-802149"
	I0722 11:50:56.032712   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.032892   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.035477   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.035797   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.035826   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.035969   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.036126   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.036288   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.036426   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.036649   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.036806   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.036818   59477 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-802149 && echo "embed-certs-802149" | sudo tee /etc/hostname
	I0722 11:50:56.158574   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-802149
	
	I0722 11:50:56.158609   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.161390   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.161780   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.161812   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.161978   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.162246   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.162444   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.162593   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.162793   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.162965   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.162983   59477 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-802149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-802149/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-802149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:50:56.281386   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:50:56.281421   59477 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:50:56.281454   59477 buildroot.go:174] setting up certificates
	I0722 11:50:56.281470   59477 provision.go:84] configureAuth start
	I0722 11:50:56.281487   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetMachineName
	I0722 11:50:56.281781   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:56.284122   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.284438   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.284468   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.284549   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.286400   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.286806   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.286835   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.286962   59477 provision.go:143] copyHostCerts
	I0722 11:50:56.287027   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:50:56.287038   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:50:56.287102   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:50:56.287205   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:50:56.287214   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:50:56.287241   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:50:56.287297   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:50:56.287304   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:50:56.287326   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:50:56.287372   59477 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.embed-certs-802149 san=[127.0.0.1 192.168.72.113 embed-certs-802149 localhost minikube]
	I0722 11:50:56.388618   59477 provision.go:177] copyRemoteCerts
	I0722 11:50:56.388666   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:50:56.388689   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.391149   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.391436   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.391460   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.391656   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.391810   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.391928   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.392068   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:56.474640   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:50:56.497641   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 11:50:56.519444   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:50:56.541351   59477 provision.go:87] duration metric: took 259.857731ms to configureAuth
	I0722 11:50:56.541381   59477 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:50:56.541543   59477 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:50:56.541625   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.544154   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.544682   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.544718   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.544922   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.545125   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.545301   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.545427   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.545653   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.545828   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.545844   59477 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:50:56.811690   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:50:56.811726   59477 machine.go:97] duration metric: took 891.484788ms to provisionDockerMachine
	I0722 11:50:56.811740   59477 start.go:293] postStartSetup for "embed-certs-802149" (driver="kvm2")
	I0722 11:50:56.811772   59477 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:50:56.811791   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:56.812107   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:50:56.812137   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.814602   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.815007   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.815032   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.815143   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.815380   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.815566   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.815746   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:56.904332   59477 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:50:56.908423   59477 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:50:56.908451   59477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:50:56.908508   59477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:50:56.908587   59477 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:50:56.908680   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:50:56.919264   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:50:56.943783   59477 start.go:296] duration metric: took 132.033326ms for postStartSetup
	I0722 11:50:56.943814   59477 fix.go:56] duration metric: took 18.943205526s for fixHost
	I0722 11:50:56.943833   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:56.946256   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.946547   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:56.946575   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:56.946732   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:56.946929   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.947082   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:56.947188   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:56.947356   59477 main.go:141] libmachine: Using SSH client type: native
	I0722 11:50:56.947518   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0722 11:50:56.947528   59477 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:50:57.056893   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649057.031410961
	
	I0722 11:50:57.056927   59477 fix.go:216] guest clock: 1721649057.031410961
	I0722 11:50:57.056936   59477 fix.go:229] Guest: 2024-07-22 11:50:57.031410961 +0000 UTC Remote: 2024-07-22 11:50:56.943818166 +0000 UTC m=+212.308172183 (delta=87.592795ms)
	I0722 11:50:57.056961   59477 fix.go:200] guest clock delta is within tolerance: 87.592795ms
	I0722 11:50:57.056970   59477 start.go:83] releasing machines lock for "embed-certs-802149", held for 19.056384178s
	I0722 11:50:57.057002   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.057268   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:57.059965   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.060412   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.060443   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.060671   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061167   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061345   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:50:57.061428   59477 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:50:57.061479   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:57.061561   59477 ssh_runner.go:195] Run: cat /version.json
	I0722 11:50:57.061586   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:50:57.064433   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.064735   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.064856   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.064879   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.065018   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:57.065118   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:57.065143   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:57.065201   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:57.065298   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:50:57.065408   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:57.065481   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:50:57.065556   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:57.065624   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:50:57.065770   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:50:57.167044   59477 ssh_runner.go:195] Run: systemctl --version
	I0722 11:50:57.172714   59477 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:50:57.313674   59477 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:50:57.319474   59477 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:50:57.319535   59477 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:50:57.335011   59477 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:50:57.335031   59477 start.go:495] detecting cgroup driver to use...
	I0722 11:50:57.335093   59477 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:50:57.351191   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:50:57.365322   59477 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:50:57.365376   59477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:50:57.379264   59477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:50:57.393946   59477 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:50:57.510830   59477 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:50:57.687208   59477 docker.go:233] disabling docker service ...
	I0722 11:50:57.687269   59477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:50:57.703909   59477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:50:57.717812   59477 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:50:57.855988   59477 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:50:57.973911   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:50:57.988891   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:50:58.007784   59477 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 11:50:58.007841   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.019588   59477 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:50:58.019649   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.030056   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.042635   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.053368   59477 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:50:58.064180   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.074677   59477 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:50:58.092573   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
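The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch cgroup_manager to cgroupfs, and inject a default_sysctls entry that opens unprivileged ports. A rough Go equivalent of the first two edits, assuming the file uses the usual `key = "value"` TOML layout and that the local path stands in for the real one:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // rewriteKey replaces any existing `key = ...` line with `key = "value"`,
    // mirroring the `sed -i 's|^.*key = .*$|...|'` commands in the log.
    func rewriteKey(conf []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
    }

    func main() {
        const path = "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
        conf, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        conf = rewriteKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
        conf = rewriteKey(conf, "cgroup_manager", "cgroupfs")
        if err := os.WriteFile(path, conf, 0o644); err != nil {
            panic(err)
        }
    }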
	I0722 11:50:58.103630   59477 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:50:58.114065   59477 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:50:58.114131   59477 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:50:58.128769   59477 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:50:58.139226   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:50:58.301342   59477 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:50:58.455996   59477 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:50:58.456085   59477 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:50:58.460904   59477 start.go:563] Will wait 60s for crictl version
	I0722 11:50:58.460969   59477 ssh_runner.go:195] Run: which crictl
	I0722 11:50:58.464918   59477 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:50:58.501783   59477 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:50:58.501867   59477 ssh_runner.go:195] Run: crio --version
	I0722 11:50:58.529010   59477 ssh_runner.go:195] Run: crio --version
	I0722 11:50:58.566811   59477 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 11:50:58.568309   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetIP
	I0722 11:50:58.571088   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:58.571594   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:50:58.571620   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:50:58.571813   59477 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 11:50:58.575927   59477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:50:58.589002   59477 kubeadm.go:883] updating cluster {Name:embed-certs-802149 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:50:58.589126   59477 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:50:58.589187   59477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:50:58.625716   59477 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 11:50:58.625836   59477 ssh_runner.go:195] Run: which lz4
	I0722 11:50:58.629760   59477 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 11:50:58.634037   59477 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:50:58.634070   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 11:50:57.080830   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .Start
	I0722 11:50:57.080987   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring networks are active...
	I0722 11:50:57.081647   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring network default is active
	I0722 11:50:57.081955   59674 main.go:141] libmachine: (old-k8s-version-101261) Ensuring network mk-old-k8s-version-101261 is active
	I0722 11:50:57.082277   59674 main.go:141] libmachine: (old-k8s-version-101261) Getting domain xml...
	I0722 11:50:57.083008   59674 main.go:141] libmachine: (old-k8s-version-101261) Creating domain...
	I0722 11:50:58.331212   59674 main.go:141] libmachine: (old-k8s-version-101261) Waiting to get IP...
	I0722 11:50:58.332090   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:58.332510   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:58.332594   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:58.332505   60690 retry.go:31] will retry after 310.971479ms: waiting for machine to come up
	I0722 11:50:58.645391   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:58.645871   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:58.645898   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:58.645841   60690 retry.go:31] will retry after 371.739884ms: waiting for machine to come up
	I0722 11:50:59.019622   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.020229   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.020258   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.020202   60690 retry.go:31] will retry after 459.770177ms: waiting for machine to come up
	I0722 11:50:59.482207   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.482871   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.482901   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.482830   60690 retry.go:31] will retry after 459.633846ms: waiting for machine to come up
	I0722 11:50:59.944748   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:50:59.945204   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:50:59.945234   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:50:59.945166   60690 retry.go:31] will retry after 661.206679ms: waiting for machine to come up
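The retry.go lines above poll libvirt for the domain's DHCP lease, sleeping a little longer after each failed lookup. A generic sketch of that retry-with-growing-backoff pattern; lookupIP is a hypothetical stand-in for the lease query, and the exact growth factor and jitter are assumptions:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical placeholder for "ask libvirt for the domain's lease".
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries lookupIP with a growing, jittered delay until a deadline.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            backoff = backoff * 3 / 2 // grow the delay between attempts
        }
        return "", fmt.Errorf("machine did not get an IP within %v", timeout)
    }

    func main() {
        if _, err := waitForIP(3 * time.Second); err != nil {
            fmt.Println(err)
        }
    }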
	I0722 11:51:00.149442   59477 crio.go:462] duration metric: took 1.519707341s to copy over tarball
	I0722 11:51:00.149516   59477 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:02.402666   59477 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.253119001s)
	I0722 11:51:02.402691   59477 crio.go:469] duration metric: took 2.253218813s to extract the tarball
	I0722 11:51:02.402699   59477 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:02.441191   59477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:02.487854   59477 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:51:02.487881   59477 cache_images.go:84] Images are preloaded, skipping loading
	I0722 11:51:02.487890   59477 kubeadm.go:934] updating node { 192.168.72.113 8443 v1.30.3 crio true true} ...
	I0722 11:51:02.488035   59477 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-802149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:02.488123   59477 ssh_runner.go:195] Run: crio config
	I0722 11:51:02.532769   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:51:02.532790   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:02.532801   59477 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:02.532833   59477 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.113 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-802149 NodeName:embed-certs-802149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:51:02.533018   59477 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.113
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-802149"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.113
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.113"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:02.533107   59477 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 11:51:02.543311   59477 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:02.543385   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:02.552865   59477 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0722 11:51:02.569231   59477 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:02.584952   59477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0722 11:51:02.601722   59477 ssh_runner.go:195] Run: grep 192.168.72.113	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:02.605830   59477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.113	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
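The shell pipeline above makes the hosts entry idempotent: it drops any existing control-plane.minikube.internal line from /etc/hosts and appends a fresh one, writing through a temp file before copying it back with sudo. The same idea in Go, assuming plain read/write access to a hosts file (a temp path is used here instead of /etc/hosts, which needs root):

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry removes any line ending in "\t"+host and appends "ip\thost".
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop any stale entry for this host
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // /tmp/hosts stands in for /etc/hosts.
        if err := ensureHostsEntry("/tmp/hosts", "192.168.72.113", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }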
	I0722 11:51:02.617991   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:02.739082   59477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:02.756204   59477 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149 for IP: 192.168.72.113
	I0722 11:51:02.756226   59477 certs.go:194] generating shared ca certs ...
	I0722 11:51:02.756254   59477 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:02.756452   59477 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:02.756509   59477 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:02.756521   59477 certs.go:256] generating profile certs ...
	I0722 11:51:02.756641   59477 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/client.key
	I0722 11:51:02.756720   59477 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key.447fbea1
	I0722 11:51:02.756767   59477 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.key
	I0722 11:51:02.756907   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:02.756955   59477 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:02.756968   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:02.757004   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:02.757037   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:02.757073   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:02.757130   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:02.758009   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:02.791767   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:02.833143   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:02.859372   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:02.888441   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0722 11:51:02.926712   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 11:51:02.963931   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:02.986981   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/embed-certs-802149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 11:51:03.010885   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:03.033851   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:03.057467   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:03.080230   59477 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:03.096981   59477 ssh_runner.go:195] Run: openssl version
	I0722 11:51:03.103002   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:03.114012   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.118692   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.118743   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:03.124703   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:03.134986   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:03.145119   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.149396   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.149442   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:03.154767   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:03.165063   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:03.175292   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.179650   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.179691   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:03.184991   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
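The ls/openssl/ln sequence above installs each extra CA under /etc/ssl/certs using OpenSSL's subject-hash naming (<hash>.0), which is how the system trust store looks certificates up. A sketch that shells out to the openssl CLI for the hash; it assumes openssl is on PATH, the paths are only examples, and the target directory is writable:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA symlinks certPath into dir under its OpenSSL subject-hash name.
    func installCA(certPath, dir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(dir, hash+".0")
        _ = os.Remove(link) // replace any stale link, like `ln -fs`
        return os.Symlink(certPath, link)
    }

    func main() {
        // Example paths; the log links /usr/share/ca-certificates/*.pem into /etc/ssl/certs.
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }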
	I0722 11:51:03.195065   59477 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:03.199423   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:03.205027   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:03.210699   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:03.216411   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:03.221888   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:03.227658   59477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
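Each `openssl x509 -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would mark it for regeneration. The equivalent check with Go's crypto/x509, with the certificate path being only an example:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same question `openssl x509 -checkend` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // apiserver.crt stands in for /var/lib/minikube/certs/apiserver-kubelet-client.crt.
        soon, err := expiresWithin("apiserver.crt", 24*time.Hour) // 86400 seconds
        if err != nil {
            panic(err)
        }
        if soon {
            fmt.Println("certificate expires within 24h and would be regenerated")
        }
    }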
	I0722 11:51:03.233098   59477 kubeadm.go:392] StartCluster: {Name:embed-certs-802149 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-802149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:03.233171   59477 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:03.233221   59477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:03.269240   59477 cri.go:89] found id: ""
	I0722 11:51:03.269311   59477 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:03.279739   59477 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:03.279758   59477 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:03.279809   59477 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:03.289523   59477 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:03.290456   59477 kubeconfig.go:125] found "embed-certs-802149" server: "https://192.168.72.113:8443"
	I0722 11:51:03.292369   59477 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:03.301716   59477 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.113
	I0722 11:51:03.301749   59477 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:03.301758   59477 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:03.301794   59477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:03.337520   59477 cri.go:89] found id: ""
	I0722 11:51:03.337587   59477 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:03.352758   59477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:03.362272   59477 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:03.362305   59477 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:03.362350   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:51:03.370574   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:03.370621   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:03.379339   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:51:03.387427   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:03.387470   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:03.395970   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:51:03.404226   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:03.404280   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:03.412683   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:51:03.420838   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:03.420877   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:03.429146   59477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:03.440442   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:03.565768   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:04.457748   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:00.608285   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:00.608737   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:00.608759   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:00.608685   60690 retry.go:31] will retry after 728.049334ms: waiting for machine to come up
	I0722 11:51:01.337864   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:01.338406   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:01.338437   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:01.338329   60690 retry.go:31] will retry after 1.060339766s: waiting for machine to come up
	I0722 11:51:02.400096   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:02.400633   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:02.400664   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:02.400580   60690 retry.go:31] will retry after 957.922107ms: waiting for machine to come up
	I0722 11:51:03.360231   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:03.360663   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:03.360692   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:03.360612   60690 retry.go:31] will retry after 1.717107267s: waiting for machine to come up
	I0722 11:51:05.080655   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:05.081172   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:05.081196   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:05.081111   60690 retry.go:31] will retry after 1.708281457s: waiting for machine to come up
	I0722 11:51:04.673803   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:04.746647   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
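Because existing configuration was found, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full `kubeadm init`. A sketch that drives the same phase sequence via exec; the binary and config paths are taken from the log, and like the `sudo env` invocations above it would have to run as root on the node:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        const kubeadm = "/var/lib/minikube/binaries/v1.30.3/kubeadm"
        const config = "/var/tmp/minikube/kubeadm.yaml"
        // Phase order mirrors the restartPrimaryControlPlane log lines above.
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", config)
            out, err := exec.Command(kubeadm, args...).CombinedOutput()
            if err != nil {
                fmt.Printf("phase %v failed: %v\n%s", p, err, out)
                return
            }
        }
        fmt.Println("all kubeadm init phases completed")
    }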
	I0722 11:51:04.870194   59477 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:04.870304   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.370787   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.870977   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:05.971259   59477 api_server.go:72] duration metric: took 1.101066217s to wait for apiserver process to appear ...
	I0722 11:51:05.971291   59477 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:51:05.971313   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:05.971841   59477 api_server.go:269] stopped: https://192.168.72.113:8443/healthz: Get "https://192.168.72.113:8443/healthz": dial tcp 192.168.72.113:8443: connect: connection refused
	I0722 11:51:06.471490   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.174013   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:09.174041   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:09.174055   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.201462   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.201513   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:09.471884   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.477573   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.477592   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:06.790946   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:06.791370   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:06.791398   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:06.791331   60690 retry.go:31] will retry after 2.398904394s: waiting for machine to come up
	I0722 11:51:09.193385   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:09.193778   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:09.193806   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:09.193704   60690 retry.go:31] will retry after 2.18416034s: waiting for machine to come up
	I0722 11:51:09.972279   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:09.982112   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:09.982144   59477 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:10.471495   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:51:10.478784   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I0722 11:51:10.487326   59477 api_server.go:141] control plane version: v1.30.3
	I0722 11:51:10.487355   59477 api_server.go:131] duration metric: took 4.516056164s to wait for apiserver health ...
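The api_server.go loop above polls https://192.168.72.113:8443/healthz, tolerating connection-refused, 403 and 500 responses until the post-start hooks finish and the endpoint returns 200. A minimal standalone poller showing the same pattern; TLS verification is skipped here only because the sketch has no access to the cluster CA that minikube would normally use:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Sketch only: skip verification instead of loading the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %v", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.113:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }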
	I0722 11:51:10.487365   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:51:10.487374   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:10.488949   59477 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:51:10.490288   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:51:10.507047   59477 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:51:10.526828   59477 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:51:10.541695   59477 system_pods.go:59] 8 kube-system pods found
	I0722 11:51:10.541731   59477 system_pods.go:61] "coredns-7db6d8ff4d-s2zgw" [13ffaca7-beca-4c43-b7a7-2167fe71295c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:51:10.541741   59477 system_pods.go:61] "etcd-embed-certs-802149" [f81bfdc3-cc8f-40d3-9f6c-6b84b6490c07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:51:10.541752   59477 system_pods.go:61] "kube-apiserver-embed-certs-802149" [325b1597-385e-44df-b65c-2de853d792eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:51:10.541760   59477 system_pods.go:61] "kube-controller-manager-embed-certs-802149" [25d3ae23-fe5d-46b7-8d93-917d7c83912b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:51:10.541772   59477 system_pods.go:61] "kube-proxy-t9lkm" [0712acb3-3926-4b78-9c64-a7e46b1a4b18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 11:51:10.541780   59477 system_pods.go:61] "kube-scheduler-embed-certs-802149" [b521ffd3-9422-4df4-9f25-5e81a2d0fa9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:51:10.541788   59477 system_pods.go:61] "metrics-server-569cc877fc-wm2w8" [db886758-d7bb-41b3-b127-6f9fef839af0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:51:10.541799   59477 system_pods.go:61] "storage-provisioner" [291229fb-8a57-4976-911c-070ccc93adcd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 11:51:10.541810   59477 system_pods.go:74] duration metric: took 14.964696ms to wait for pod list to return data ...
	I0722 11:51:10.541822   59477 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:51:10.545280   59477 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:51:10.545307   59477 node_conditions.go:123] node cpu capacity is 2
	I0722 11:51:10.545327   59477 node_conditions.go:105] duration metric: took 3.49089ms to run NodePressure ...
	I0722 11:51:10.545349   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:10.812864   59477 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:51:10.817360   59477 kubeadm.go:739] kubelet initialised
	I0722 11:51:10.817379   59477 kubeadm.go:740] duration metric: took 4.491449ms waiting for restarted kubelet to initialise ...
	I0722 11:51:10.817387   59477 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:51:10.823766   59477 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.829370   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.829399   59477 pod_ready.go:81] duration metric: took 5.605447ms for pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.829411   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "coredns-7db6d8ff4d-s2zgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.829420   59477 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.835224   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "etcd-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.835250   59477 pod_ready.go:81] duration metric: took 5.819727ms for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.835261   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "etcd-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.835270   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.840324   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.840355   59477 pod_ready.go:81] duration metric: took 5.074415ms for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.840369   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.840378   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:10.939805   59477 pod_ready.go:97] node "embed-certs-802149" hosting pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.939828   59477 pod_ready.go:81] duration metric: took 99.423274ms for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:10.939837   59477 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-802149" hosting pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-802149" has status "Ready":"False"
	I0722 11:51:10.939843   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t9lkm" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:11.329932   59477 pod_ready.go:92] pod "kube-proxy-t9lkm" in "kube-system" namespace has status "Ready":"True"
	I0722 11:51:11.329954   59477 pod_ready.go:81] duration metric: took 390.103451ms for pod "kube-proxy-t9lkm" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:11.329964   59477 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:13.336193   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:11.378924   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:11.379301   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | unable to find current IP address of domain old-k8s-version-101261 in network mk-old-k8s-version-101261
	I0722 11:51:11.379324   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | I0722 11:51:11.379257   60690 retry.go:31] will retry after 3.119433482s: waiting for machine to come up
	I0722 11:51:14.501549   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.502004   59674 main.go:141] libmachine: (old-k8s-version-101261) Found IP for machine: 192.168.50.51
	I0722 11:51:14.502029   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has current primary IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.502040   59674 main.go:141] libmachine: (old-k8s-version-101261) Reserving static IP address...
	I0722 11:51:14.502410   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "old-k8s-version-101261", mac: "52:54:00:e5:34:9a", ip: "192.168.50.51"} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.502429   59674 main.go:141] libmachine: (old-k8s-version-101261) Reserved static IP address: 192.168.50.51
	I0722 11:51:14.502448   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | skip adding static IP to network mk-old-k8s-version-101261 - found existing host DHCP lease matching {name: "old-k8s-version-101261", mac: "52:54:00:e5:34:9a", ip: "192.168.50.51"}
	I0722 11:51:14.502464   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Getting to WaitForSSH function...
	I0722 11:51:14.502481   59674 main.go:141] libmachine: (old-k8s-version-101261) Waiting for SSH to be available...
	I0722 11:51:14.504709   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.504989   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.505018   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.505192   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using SSH client type: external
	I0722 11:51:14.505229   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa (-rw-------)
	I0722 11:51:14.505273   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:14.505287   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | About to run SSH command:
	I0722 11:51:14.505300   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | exit 0
	I0722 11:51:14.628343   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | SSH cmd err, output: <nil>: 
	I0722 11:51:14.628747   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetConfigRaw
	I0722 11:51:14.629343   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:14.631934   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.632294   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.632323   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.632541   59674 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/config.json ...
	I0722 11:51:14.632730   59674 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:14.632747   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:14.632934   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.635214   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.635567   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.635594   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.635663   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.635887   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.636070   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.636212   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.636492   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.636656   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.636665   59674 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:14.745179   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:14.745210   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:14.745456   59674 buildroot.go:166] provisioning hostname "old-k8s-version-101261"
	I0722 11:51:14.745482   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:14.745664   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.748709   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.749155   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.749187   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.749356   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.749528   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.749708   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.749851   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.750115   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.750325   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.750339   59674 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-101261 && echo "old-k8s-version-101261" | sudo tee /etc/hostname
	I0722 11:51:14.878323   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-101261
	
	I0722 11:51:14.878374   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:14.881403   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.881776   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:14.881799   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:14.882004   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:14.882191   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.882368   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:14.882523   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:14.882714   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:14.882886   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:14.882914   59674 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-101261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-101261/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-101261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:15.005182   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:15.005211   59674 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:15.005232   59674 buildroot.go:174] setting up certificates
	I0722 11:51:15.005244   59674 provision.go:84] configureAuth start
	I0722 11:51:15.005257   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetMachineName
	I0722 11:51:15.005510   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:15.008414   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.008818   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.008842   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.009021   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.011255   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.011549   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.011571   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.011712   59674 provision.go:143] copyHostCerts
	I0722 11:51:15.011784   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:15.011798   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:15.011862   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:15.011991   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:15.012003   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:15.012033   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:15.012117   59674 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:15.012126   59674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:15.012156   59674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:15.012235   59674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-101261 san=[127.0.0.1 192.168.50.51 localhost minikube old-k8s-version-101261]
	I0722 11:51:16.173298   60225 start.go:364] duration metric: took 2m0.300081245s to acquireMachinesLock for "default-k8s-diff-port-605740"
	I0722 11:51:16.173351   60225 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:51:16.173359   60225 fix.go:54] fixHost starting: 
	I0722 11:51:16.173747   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:51:16.173788   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:51:16.189994   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0722 11:51:16.190364   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:51:16.190849   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:51:16.190880   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:51:16.191295   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:51:16.191520   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:16.191701   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:51:16.193226   60225 fix.go:112] recreateIfNeeded on default-k8s-diff-port-605740: state=Stopped err=<nil>
	I0722 11:51:16.193246   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	W0722 11:51:16.193413   60225 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:51:16.195294   60225 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-605740" ...
	I0722 11:51:15.514379   59674 provision.go:177] copyRemoteCerts
	I0722 11:51:15.514438   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:15.514471   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.517061   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.517350   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.517375   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.517523   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.517692   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.517856   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.517976   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:15.598446   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:15.622512   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 11:51:15.645865   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 11:51:15.669136   59674 provision.go:87] duration metric: took 663.880253ms to configureAuth
	I0722 11:51:15.669166   59674 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:15.669360   59674 config.go:182] Loaded profile config "old-k8s-version-101261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 11:51:15.669441   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.672245   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.672720   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.672769   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.672859   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.673066   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.673228   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.673348   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.673589   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:15.673764   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:15.673784   59674 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:15.935046   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:15.935071   59674 machine.go:97] duration metric: took 1.302328915s to provisionDockerMachine
	I0722 11:51:15.935082   59674 start.go:293] postStartSetup for "old-k8s-version-101261" (driver="kvm2")
	I0722 11:51:15.935094   59674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:15.935114   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:15.935445   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:15.935485   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:15.938454   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.938802   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:15.938828   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:15.939013   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:15.939212   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:15.939341   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:15.939477   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.023536   59674 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:16.028446   59674 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:16.028474   59674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:16.028542   59674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:16.028639   59674 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:16.028746   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:16.038705   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:16.065421   59674 start.go:296] duration metric: took 130.328201ms for postStartSetup
	I0722 11:51:16.065455   59674 fix.go:56] duration metric: took 19.008317885s for fixHost
	I0722 11:51:16.065480   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.068098   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.068330   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.068354   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.068486   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.068697   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.068883   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.069035   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.069215   59674 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:16.069371   59674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0722 11:51:16.069380   59674 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:51:16.173115   59674 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649076.142588532
	
	I0722 11:51:16.173135   59674 fix.go:216] guest clock: 1721649076.142588532
	I0722 11:51:16.173149   59674 fix.go:229] Guest: 2024-07-22 11:51:16.142588532 +0000 UTC Remote: 2024-07-22 11:51:16.065460257 +0000 UTC m=+220.687192060 (delta=77.128275ms)
	I0722 11:51:16.173189   59674 fix.go:200] guest clock delta is within tolerance: 77.128275ms
	I0722 11:51:16.173196   59674 start.go:83] releasing machines lock for "old-k8s-version-101261", held for 19.116093793s
	I0722 11:51:16.173224   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.173497   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:16.176102   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.176522   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.176564   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.176712   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177189   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177387   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .DriverName
	I0722 11:51:16.177476   59674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:16.177519   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.177627   59674 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:16.177650   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHHostname
	I0722 11:51:16.180365   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180402   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180751   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.180773   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180806   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:16.180819   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:16.180908   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.181020   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHPort
	I0722 11:51:16.181091   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.181168   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHKeyPath
	I0722 11:51:16.181254   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.181331   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetSSHUsername
	I0722 11:51:16.181346   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.181492   59674 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/old-k8s-version-101261/id_rsa Username:docker}
	I0722 11:51:16.262013   59674 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:16.292921   59674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:16.437729   59674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:16.443840   59674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:16.443929   59674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:16.459686   59674 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:16.459703   59674 start.go:495] detecting cgroup driver to use...
	I0722 11:51:16.459761   59674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:16.474514   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:16.487808   59674 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:16.487862   59674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:16.500977   59674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:16.514210   59674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:16.629558   59674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:16.810274   59674 docker.go:233] disabling docker service ...
	I0722 11:51:16.810351   59674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:16.829708   59674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:16.848587   59674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:16.973745   59674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:17.114538   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:17.128727   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:17.147575   59674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 11:51:17.147628   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.157881   59674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:17.157939   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.168881   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.179407   59674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:17.189894   59674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:17.201433   59674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:17.210901   59674 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:17.210954   59674 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:17.224683   59674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
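
Note: the sysctl probe a few lines up exits with status 255 because /proc/sys/net/bridge does not exist until the br_netfilter module is loaded; minikube then loads the module and enables IPv4 forwarding. A condensed sketch of the same prerequisites follows; the explicit bridge-nf-call-iptables write is the usual companion step and is an addition here, not shown verbatim in this log:

    # Load the bridge netfilter module so bridged pod traffic traverses iptables,
    # then enable the sysctls Kubernetes networking expects.
    sudo modprobe br_netfilter
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1   # assumed companion step
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
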
	I0722 11:51:17.235711   59674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:17.366833   59674 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:17.508852   59674 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:17.508932   59674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:17.514001   59674 start.go:563] Will wait 60s for crictl version
	I0722 11:51:17.514051   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:17.517678   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:17.555193   59674 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:17.555272   59674 ssh_runner.go:195] Run: crio --version
	I0722 11:51:17.583250   59674 ssh_runner.go:195] Run: crio --version
	I0722 11:51:17.615045   59674 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 11:51:15.837077   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:17.838129   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:17.616423   59674 main.go:141] libmachine: (old-k8s-version-101261) Calling .GetIP
	I0722 11:51:17.619616   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:17.620012   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:34:9a", ip: ""} in network mk-old-k8s-version-101261: {Iface:virbr2 ExpiryTime:2024-07-22 12:51:08 +0000 UTC Type:0 Mac:52:54:00:e5:34:9a Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-101261 Clientid:01:52:54:00:e5:34:9a}
	I0722 11:51:17.620043   59674 main.go:141] libmachine: (old-k8s-version-101261) DBG | domain old-k8s-version-101261 has defined IP address 192.168.50.51 and MAC address 52:54:00:e5:34:9a in network mk-old-k8s-version-101261
	I0722 11:51:17.620213   59674 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:17.624632   59674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:17.639759   59674 kubeadm.go:883] updating cluster {Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:17.639882   59674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 11:51:17.639923   59674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:17.688299   59674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:51:17.688370   59674 ssh_runner.go:195] Run: which lz4
	I0722 11:51:17.692462   59674 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 11:51:17.696723   59674 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:51:17.696761   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 11:51:19.364933   59674 crio.go:462] duration metric: took 1.672511697s to copy over tarball
	I0722 11:51:19.365010   59674 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 11:51:16.196500   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Start
	I0722 11:51:16.196676   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring networks are active...
	I0722 11:51:16.197307   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring network default is active
	I0722 11:51:16.197719   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Ensuring network mk-default-k8s-diff-port-605740 is active
	I0722 11:51:16.198143   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Getting domain xml...
	I0722 11:51:16.198839   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Creating domain...
	I0722 11:51:17.463368   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting to get IP...
	I0722 11:51:17.464268   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.464666   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.464716   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:17.464632   60829 retry.go:31] will retry after 215.824583ms: waiting for machine to come up
	I0722 11:51:17.682231   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.682588   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:17.682616   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:17.682546   60829 retry.go:31] will retry after 345.816562ms: waiting for machine to come up
	I0722 11:51:18.030040   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.030591   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.030625   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.030526   60829 retry.go:31] will retry after 332.854172ms: waiting for machine to come up
	I0722 11:51:18.365009   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.365493   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.365522   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.365455   60829 retry.go:31] will retry after 478.33893ms: waiting for machine to come up
	I0722 11:51:18.846014   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.846447   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:18.846475   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:18.846386   60829 retry.go:31] will retry after 484.269461ms: waiting for machine to come up
	I0722 11:51:19.332181   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:19.332572   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:19.332607   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:19.332523   60829 retry.go:31] will retry after 856.318702ms: waiting for machine to come up
	I0722 11:51:20.190301   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.190751   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.190775   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:20.190702   60829 retry.go:31] will retry after 747.6345ms: waiting for machine to come up
	I0722 11:51:19.838679   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:21.850685   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:24.338532   59477 pod_ready.go:102] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:22.347245   59674 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.982204367s)
	I0722 11:51:22.347275   59674 crio.go:469] duration metric: took 2.982313685s to extract the tarball
	I0722 11:51:22.347283   59674 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:22.390059   59674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:22.429356   59674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 11:51:22.429383   59674 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 11:51:22.429499   59674 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.429520   59674 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.429524   59674 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.429545   59674 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 11:51:22.429498   59674 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.429497   59674 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.429529   59674 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.429498   59674 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.431549   59674 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.431556   59674 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 11:51:22.431570   59674 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.431588   59674 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.431611   59674 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.431555   59674 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.431666   59674 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.431675   59674 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.603462   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.604733   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.608788   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.611177   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.616981   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.634838   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.674004   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 11:51:22.706162   59674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:22.730052   59674 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 11:51:22.730112   59674 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 11:51:22.730129   59674 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.730142   59674 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.730183   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.730196   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.760229   59674 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 11:51:22.760271   59674 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.760322   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.787207   59674 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 11:51:22.787244   59674 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 11:51:22.787254   59674 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.787273   59674 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.787303   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.787311   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.828611   59674 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 11:51:22.828656   59674 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.828703   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.841609   59674 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 11:51:22.841648   59674 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 11:51:22.841692   59674 ssh_runner.go:195] Run: which crictl
	I0722 11:51:22.913517   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 11:51:22.913549   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 11:51:22.913557   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 11:51:22.913605   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 11:51:22.913519   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 11:51:22.913605   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 11:51:22.913625   59674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 11:51:23.063640   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 11:51:23.063652   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 11:51:23.063742   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 11:51:23.063766   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 11:51:23.070202   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0722 11:51:23.073265   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 11:51:23.073310   59674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 11:51:23.073358   59674 cache_images.go:92] duration metric: took 643.962788ms to LoadCachedImages
	W0722 11:51:23.073425   59674 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
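	The image-cache step above (crictl images, podman image inspect, crictl rmi, LoadCachedImages) can be spot-checked by hand. A minimal sketch, assuming SSH access to the node and the profile name that appears in this log; these commands are illustrative and not part of the recorded run:
	# Same check the log performs before deciding which cached images need transfer:
	minikube ssh -p old-k8s-version-101261 -- sudo crictl images --output json
	# Any image reported above as "needs transfer" was then removed with crictl rmi
	# and re-loaded from the host's .minikube/cache/images directory.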
	I0722 11:51:23.073438   59674 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.20.0 crio true true} ...
	I0722 11:51:23.073584   59674 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-101261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:23.073666   59674 ssh_runner.go:195] Run: crio config
	I0722 11:51:23.125532   59674 cni.go:84] Creating CNI manager for ""
	I0722 11:51:23.125554   59674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:23.125566   59674 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:23.125590   59674 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-101261 NodeName:old-k8s-version-101261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 11:51:23.125753   59674 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-101261"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:23.125818   59674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 11:51:23.136207   59674 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:23.136277   59674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:23.146103   59674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0722 11:51:23.163756   59674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:23.183108   59674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
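	At this point the kubelet unit drop-in, the kubelet service file, and the kubeadm config shown above have all been written to the guest. If the rendered config needs to be inspected later, a hedged example using the staging path from this log (the file is promoted to /var/tmp/minikube/kubeadm.yaml by the "sudo cp" further down):
	# Illustrative only; profile name and path taken from this log.
	minikube ssh -p old-k8s-version-101261 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new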
	I0722 11:51:23.201223   59674 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:23.205369   59674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:23.218711   59674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:23.339415   59674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:23.358601   59674 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261 for IP: 192.168.50.51
	I0722 11:51:23.358622   59674 certs.go:194] generating shared ca certs ...
	I0722 11:51:23.358654   59674 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:23.358813   59674 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:23.358865   59674 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:23.358877   59674 certs.go:256] generating profile certs ...
	I0722 11:51:23.358990   59674 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/client.key
	I0722 11:51:23.359058   59674 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key.455618c3
	I0722 11:51:23.359110   59674 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key
	I0722 11:51:23.359248   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:23.359286   59674 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:23.359300   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:23.359332   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:23.359363   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:23.359393   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:23.359445   59674 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:23.360290   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:23.407113   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:23.439799   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:23.484136   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:23.513902   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 11:51:23.551266   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:51:23.581930   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:23.612470   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:51:23.644003   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:23.671068   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:23.695514   59674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:23.722711   59674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:23.742312   59674 ssh_runner.go:195] Run: openssl version
	I0722 11:51:23.749680   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:23.763975   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.769799   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.769848   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:23.777286   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:51:23.788007   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:23.799005   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.803367   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.803405   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:23.809239   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:23.820095   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:23.832492   59674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.837230   59674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.837268   59674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:23.842861   59674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
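	The "openssl x509 -hash" / "ln -fs" pairs above follow the standard c_rehash layout: each CA in /etc/ssl/certs gets a symlink named after its subject hash plus a ".0" suffix (b5213941.0 for minikubeCA here). A minimal sketch of the same step for one certificate, illustrative only:
	# Compute the subject hash, then create the hashed symlink that OpenSSL-based
	# clients use to locate the CA under /etc/ssl/certs.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"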
	I0722 11:51:23.853772   59674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:23.858178   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:23.864134   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:23.870035   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:23.875939   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:23.881552   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:23.887286   59674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
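	Each "-checkend 86400" run above is a 24-hour expiry probe: openssl exits 0 if the certificate will still be valid in 86400 seconds and non-zero otherwise, which is what lets minikube skip regenerating these certs. Illustrative form for a single cert, using a path from the log:
	# Exit 0: valid for at least another 24h; exit 1: expires (or has expired) within 24h.
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid for 24h" || echo "needs regeneration"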
	I0722 11:51:23.893029   59674 kubeadm.go:392] StartCluster: {Name:old-k8s-version-101261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-101261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:23.893133   59674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:23.893184   59674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:23.939121   59674 cri.go:89] found id: ""
	I0722 11:51:23.939187   59674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:23.951089   59674 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:23.951108   59674 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:23.951154   59674 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:23.962212   59674 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:23.963627   59674 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-101261" does not appear in /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:51:23.964627   59674 kubeconfig.go:62] /home/jenkins/minikube-integration/19313-5960/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-101261" cluster setting kubeconfig missing "old-k8s-version-101261" context setting]
	I0722 11:51:23.966075   59674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:24.070513   59674 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:24.081628   59674 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0722 11:51:24.081662   59674 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:24.081674   59674 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:24.081728   59674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:24.117673   59674 cri.go:89] found id: ""
	I0722 11:51:24.117750   59674 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:24.134081   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:24.144294   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:24.144315   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:24.144366   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:51:24.153640   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:24.153685   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:24.163252   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:51:24.173762   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:24.173815   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:24.183272   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:51:24.194090   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:24.194148   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:24.205213   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:51:24.215709   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:24.215787   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:24.226876   59674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:24.237966   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:24.378277   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:20.939620   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.940073   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:20.940106   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:20.940007   60829 retry.go:31] will retry after 1.295925992s: waiting for machine to come up
	I0722 11:51:22.237614   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:22.238096   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:22.238128   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:22.238045   60829 retry.go:31] will retry after 1.652562745s: waiting for machine to come up
	I0722 11:51:23.891976   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:23.892496   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:23.892519   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:23.892468   60829 retry.go:31] will retry after 2.313623774s: waiting for machine to come up
	I0722 11:51:24.839903   59477 pod_ready.go:92] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:51:24.839939   59477 pod_ready.go:81] duration metric: took 13.509966584s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:24.839957   59477 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:26.847104   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:29.345675   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:25.787025   59674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.408710522s)
	I0722 11:51:25.787059   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.031231   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.120122   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:26.216108   59674 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:26.216204   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:26.717257   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:27.216782   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:27.716476   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:28.216529   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:28.716302   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:29.216249   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:29.717071   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:30.216364   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:26.207294   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:26.207841   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:26.207867   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:26.207805   60829 retry.go:31] will retry after 2.606127418s: waiting for machine to come up
	I0722 11:51:28.817432   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:28.817795   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:28.817851   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:28.817748   60829 retry.go:31] will retry after 2.617524673s: waiting for machine to come up
	I0722 11:51:31.346476   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:33.847820   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:30.716961   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.216474   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.716685   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:32.216748   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:32.716886   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:33.216333   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:33.717052   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:34.217128   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:34.716466   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:35.216975   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:31.436413   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:31.436710   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | unable to find current IP address of domain default-k8s-diff-port-605740 in network mk-default-k8s-diff-port-605740
	I0722 11:51:31.436745   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | I0722 11:51:31.436665   60829 retry.go:31] will retry after 3.455203757s: waiting for machine to come up
	I0722 11:51:34.896151   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.896595   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Found IP for machine: 192.168.39.87
	I0722 11:51:34.896619   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Reserving static IP address...
	I0722 11:51:34.896637   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has current primary IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.897007   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Reserved static IP address: 192.168.39.87
	I0722 11:51:34.897037   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-605740", mac: "52:54:00:23:45:e9", ip: "192.168.39.87"} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:34.897074   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Waiting for SSH to be available...
	I0722 11:51:34.897094   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | skip adding static IP to network mk-default-k8s-diff-port-605740 - found existing host DHCP lease matching {name: "default-k8s-diff-port-605740", mac: "52:54:00:23:45:e9", ip: "192.168.39.87"}
	I0722 11:51:34.897107   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Getting to WaitForSSH function...
	I0722 11:51:34.899104   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.899428   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:34.899450   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:34.899570   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Using SSH client type: external
	I0722 11:51:34.899594   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa (-rw-------)
	I0722 11:51:34.899619   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:34.899636   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | About to run SSH command:
	I0722 11:51:34.899651   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | exit 0
	I0722 11:51:35.028440   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | SSH cmd err, output: <nil>: 
	I0722 11:51:35.028814   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetConfigRaw
	I0722 11:51:35.029407   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:35.031646   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.031967   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.031998   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.032179   60225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/config.json ...
	I0722 11:51:35.032355   60225 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:35.032372   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:35.032587   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.034608   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.034924   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.034944   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.035089   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.035242   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.035368   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.035497   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.035637   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.035812   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.035823   60225 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:35.148621   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:35.148655   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.148914   60225 buildroot.go:166] provisioning hostname "default-k8s-diff-port-605740"
	I0722 11:51:35.148945   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.149128   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.151753   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.152146   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.152170   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.152294   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.152461   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.152591   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.152706   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.152847   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.153057   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.153079   60225 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-605740 && echo "default-k8s-diff-port-605740" | sudo tee /etc/hostname
	I0722 11:51:35.278248   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-605740
	
	I0722 11:51:35.278277   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.281778   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.282158   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.282189   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.282361   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.282539   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.282712   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.282826   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.283014   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.283239   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.283266   60225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-605740' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-605740/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-605740' | sudo tee -a /etc/hosts; 
				fi
			fi
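	The SSH snippet above is idempotent: it only rewrites or appends the 127.0.1.1 entry when the new hostname is not already mapped in /etc/hosts. A quick way to verify the result on the guest, using the hostname from this log (illustrative, not part of the run):
	hostname                        # expected: default-k8s-diff-port-605740
	grep '^127.0.1.1' /etc/hosts    # expected: 127.0.1.1 default-k8s-diff-port-605740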
	I0722 11:51:35.405142   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:35.405176   60225 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:35.405215   60225 buildroot.go:174] setting up certificates
	I0722 11:51:35.405228   60225 provision.go:84] configureAuth start
	I0722 11:51:35.405240   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetMachineName
	I0722 11:51:35.405502   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:35.407912   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.408262   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.408284   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.408435   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.410456   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.410794   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.410821   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.410959   60225 provision.go:143] copyHostCerts
	I0722 11:51:35.411021   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:35.411034   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:35.411613   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:35.411720   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:35.411729   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:35.411749   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:35.411803   60225 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:35.411811   60225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:35.411827   60225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:35.411881   60225 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-605740 san=[127.0.0.1 192.168.39.87 default-k8s-diff-port-605740 localhost minikube]
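	The server certificate generated here carries the SAN list shown at the end of that line (127.0.0.1, the VM IP 192.168.39.87, the machine name, localhost, minikube). One way to confirm the SANs in the generated file, assuming the host path from this log:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'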
	I0722 11:51:36.476985   58921 start.go:364] duration metric: took 53.473936955s to acquireMachinesLock for "no-preload-339929"
	I0722 11:51:36.477060   58921 start.go:96] Skipping create...Using existing machine configuration
	I0722 11:51:36.477071   58921 fix.go:54] fixHost starting: 
	I0722 11:51:36.477497   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:51:36.477538   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:51:36.494783   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33955
	I0722 11:51:36.495220   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:51:36.495728   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:51:36.495749   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:51:36.496045   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:51:36.496241   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:36.496399   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:51:36.497658   58921 fix.go:112] recreateIfNeeded on no-preload-339929: state=Stopped err=<nil>
	I0722 11:51:36.497681   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	W0722 11:51:36.497840   58921 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 11:51:36.499655   58921 out.go:177] * Restarting existing kvm2 VM for "no-preload-339929" ...
	I0722 11:51:35.787061   60225 provision.go:177] copyRemoteCerts
	I0722 11:51:35.787119   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:35.787143   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.789647   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.790048   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.790081   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.790289   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.790502   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.790665   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.790815   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:35.878791   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0722 11:51:35.902034   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:51:35.925234   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:35.948008   60225 provision.go:87] duration metric: took 542.764534ms to configureAuth
	I0722 11:51:35.948038   60225 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:35.948231   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:51:35.948315   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:35.951029   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.951381   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:35.951413   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:35.951561   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:35.951777   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.951927   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:35.952064   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:35.952196   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:35.952447   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:35.952465   60225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:36.234284   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:36.234329   60225 machine.go:97] duration metric: took 1.201960693s to provisionDockerMachine
	I0722 11:51:36.234342   60225 start.go:293] postStartSetup for "default-k8s-diff-port-605740" (driver="kvm2")
	I0722 11:51:36.234355   60225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:36.234375   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.234712   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:36.234742   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.237536   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.237897   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.237928   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.238045   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.238253   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.238435   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.238580   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.322600   60225 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:36.326734   60225 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:36.326753   60225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:36.326809   60225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:36.326893   60225 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:36.326981   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:36.335877   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:36.359701   60225 start.go:296] duration metric: took 125.346106ms for postStartSetup
	I0722 11:51:36.359734   60225 fix.go:56] duration metric: took 20.186375753s for fixHost
	I0722 11:51:36.359751   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.362282   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.362581   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.362603   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.362782   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.362976   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.363121   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.363218   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.363355   60225 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:36.363506   60225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0722 11:51:36.363515   60225 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 11:51:36.476833   60225 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649096.450051771
	
	I0722 11:51:36.476869   60225 fix.go:216] guest clock: 1721649096.450051771
	I0722 11:51:36.476877   60225 fix.go:229] Guest: 2024-07-22 11:51:36.450051771 +0000 UTC Remote: 2024-07-22 11:51:36.359737602 +0000 UTC m=+140.620851572 (delta=90.314169ms)
	I0722 11:51:36.476895   60225 fix.go:200] guest clock delta is within tolerance: 90.314169ms
	I0722 11:51:36.476900   60225 start.go:83] releasing machines lock for "default-k8s-diff-port-605740", held for 20.303575504s
	I0722 11:51:36.476926   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.477201   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:36.480567   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.480990   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.481020   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.481182   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481657   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481827   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:51:36.481906   60225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:36.481947   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.482026   60225 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:36.482044   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:51:36.484577   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.484762   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485029   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.485054   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485199   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:36.485224   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:36.485246   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.485393   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:51:36.485406   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.485524   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:51:36.485537   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.485652   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:51:36.485729   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.485788   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:51:36.565892   60225 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:36.592221   60225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:36.739153   60225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:36.746870   60225 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:36.746933   60225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:36.766745   60225 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 11:51:36.766769   60225 start.go:495] detecting cgroup driver to use...
	I0722 11:51:36.766837   60225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:36.782140   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:36.797037   60225 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:36.797118   60225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:36.810796   60225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:36.823955   60225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:36.943613   60225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:37.123238   60225 docker.go:233] disabling docker service ...
	I0722 11:51:37.123318   60225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:37.138682   60225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:37.153426   60225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:37.279469   60225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:37.404250   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:37.428047   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:37.446939   60225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 11:51:37.446994   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.457326   60225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:37.457400   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.468141   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.479246   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.489857   60225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:37.502713   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.517197   60225 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.537115   60225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:37.548917   60225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:37.559530   60225 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:37.559590   60225 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:37.574785   60225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 11:51:37.585589   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:37.730483   60225 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:37.888282   60225 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:37.888373   60225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:37.893498   60225 start.go:563] Will wait 60s for crictl version
	I0722 11:51:37.893555   60225 ssh_runner.go:195] Run: which crictl
	I0722 11:51:37.897212   60225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:37.940959   60225 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:37.941054   60225 ssh_runner.go:195] Run: crio --version
	I0722 11:51:37.969273   60225 ssh_runner.go:195] Run: crio --version
	I0722 11:51:38.001475   60225 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 11:51:36.345564   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:38.349105   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:35.716593   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.216517   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.716294   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:37.217023   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:37.716511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:38.216231   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:38.716522   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:39.216492   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:39.716478   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:40.216337   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:36.500994   58921 main.go:141] libmachine: (no-preload-339929) Calling .Start
	I0722 11:51:36.501149   58921 main.go:141] libmachine: (no-preload-339929) Ensuring networks are active...
	I0722 11:51:36.501737   58921 main.go:141] libmachine: (no-preload-339929) Ensuring network default is active
	I0722 11:51:36.502002   58921 main.go:141] libmachine: (no-preload-339929) Ensuring network mk-no-preload-339929 is active
	I0722 11:51:36.502421   58921 main.go:141] libmachine: (no-preload-339929) Getting domain xml...
	I0722 11:51:36.503225   58921 main.go:141] libmachine: (no-preload-339929) Creating domain...
	I0722 11:51:37.794982   58921 main.go:141] libmachine: (no-preload-339929) Waiting to get IP...
	I0722 11:51:37.795825   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:37.796235   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:37.796291   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:37.796218   61023 retry.go:31] will retry after 217.454766ms: waiting for machine to come up
	I0722 11:51:38.015757   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.016236   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.016258   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.016185   61023 retry.go:31] will retry after 374.564997ms: waiting for machine to come up
	I0722 11:51:38.392755   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.393280   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.393310   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.393238   61023 retry.go:31] will retry after 462.45005ms: waiting for machine to come up
	I0722 11:51:38.856969   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:38.857508   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:38.857539   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:38.857455   61023 retry.go:31] will retry after 440.89249ms: waiting for machine to come up
	I0722 11:51:39.300253   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:39.300834   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:39.300860   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:39.300774   61023 retry.go:31] will retry after 746.547558ms: waiting for machine to come up
	I0722 11:51:40.048708   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:40.049175   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:40.049211   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:40.049133   61023 retry.go:31] will retry after 608.540931ms: waiting for machine to come up
	I0722 11:51:38.002695   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetIP
	I0722 11:51:38.005678   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:38.006057   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:51:38.006085   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:51:38.006276   60225 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:38.010327   60225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:38.023216   60225 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:38.023326   60225 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 11:51:38.023375   60225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:38.059519   60225 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 11:51:38.059603   60225 ssh_runner.go:195] Run: which lz4
	I0722 11:51:38.063709   60225 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 11:51:38.068879   60225 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 11:51:38.068903   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 11:51:39.570299   60225 crio.go:462] duration metric: took 1.50662056s to copy over tarball
	I0722 11:51:39.570380   60225 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
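The two lines above copy the v1.30.3 cri-o preload tarball onto the guest and unpack it under /var so the runtime images do not have to be pulled. Below is a minimal standalone sketch of that extraction step; the tarball path and tar flags are taken from the log, while the Go wrapper itself is illustrative and is not minikube's actual code.

// preload_extract.go — illustrative sketch only, not part of the captured log.
// Mirrors the extraction step logged above: unpack the cri-o image preload
// tarball into /var so crictl sees the images without pulling them.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // path taken from the log above

	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("preload tarball not present: %v", err) // same check as the logged `stat`
	}

	start := time.Now()
	// Same flags as the logged command: keep xattrs (security.capability) and
	// decompress with lz4 while extracting under /var.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extract failed: %v", err)
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start))
}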
	I0722 11:51:40.846268   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:42.848761   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:40.716395   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:41.216516   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:41.716363   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:42.217236   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:42.716938   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:43.216950   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:43.717242   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.216318   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.716925   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.216991   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:40.658992   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:40.659502   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:40.659542   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:40.659447   61023 retry.go:31] will retry after 974.447874ms: waiting for machine to come up
	I0722 11:51:41.636057   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:41.636596   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:41.636620   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:41.636538   61023 retry.go:31] will retry after 1.040271869s: waiting for machine to come up
	I0722 11:51:42.678559   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:42.678995   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:42.679018   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:42.678938   61023 retry.go:31] will retry after 1.797018808s: waiting for machine to come up
	I0722 11:51:44.477360   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:44.477729   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:44.477764   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:44.477687   61023 retry.go:31] will retry after 2.040933698s: waiting for machine to come up
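The retry lines above show the harness polling libvirt for the restarted VM's DHCP lease, waiting a little longer after each failed attempt. A rough sketch of such a jittered backoff loop follows; getIP is a hypothetical stand-in for the driver lookup that keeps failing in the log, not a real minikube function.

// retry_ip.go — illustrative sketch only; approximates the "will retry after ..."
// backoff seen in the log while the restarted VM waits for a DHCP lease.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// getIP stands in for the libvirt lookup that keeps failing above; it is a
// placeholder that succeeds after a few attempts, not minikube's driver call.
func getIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.x", nil
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 0; attempt < 12; attempt++ {
		ip, err := getIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// jittered, roughly doubling delay, similar to the intervals logged above
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("retry %d: %v, will retry after %s\n", attempt, err, wait)
		time.Sleep(wait)
		delay *= 2
	}
	fmt.Println("gave up waiting for machine to come up")
}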
	I0722 11:51:41.921416   60225 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.35100934s)
	I0722 11:51:41.921453   60225 crio.go:469] duration metric: took 2.351127326s to extract the tarball
	I0722 11:51:41.921460   60225 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 11:51:41.959856   60225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:42.011834   60225 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 11:51:42.011864   60225 cache_images.go:84] Images are preloaded, skipping loading
	I0722 11:51:42.011874   60225 kubeadm.go:934] updating node { 192.168.39.87 8444 v1.30.3 crio true true} ...
	I0722 11:51:42.012016   60225 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-605740 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:51:42.012101   60225 ssh_runner.go:195] Run: crio config
	I0722 11:51:42.067629   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:51:42.067650   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:42.067661   60225 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:51:42.067681   60225 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.87 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-605740 NodeName:default-k8s-diff-port-605740 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.87"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:51:42.067849   60225 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.87
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-605740"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.87"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:51:42.067926   60225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 11:51:42.079267   60225 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:51:42.079331   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:51:42.089696   60225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0722 11:51:42.109204   60225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 11:51:42.125186   60225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0722 11:51:42.143217   60225 ssh_runner.go:195] Run: grep 192.168.39.87	control-plane.minikube.internal$ /etc/hosts
	I0722 11:51:42.147117   60225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.87	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:51:42.159283   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:42.297313   60225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:51:42.315795   60225 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740 for IP: 192.168.39.87
	I0722 11:51:42.315819   60225 certs.go:194] generating shared ca certs ...
	I0722 11:51:42.315838   60225 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:51:42.316036   60225 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:51:42.316104   60225 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:51:42.316121   60225 certs.go:256] generating profile certs ...
	I0722 11:51:42.316211   60225 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/client.key
	I0722 11:51:42.316281   60225 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.key.82803a6c
	I0722 11:51:42.316344   60225 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.key
	I0722 11:51:42.316515   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:51:42.316562   60225 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:51:42.316575   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:51:42.316606   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:51:42.316642   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:51:42.316673   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:51:42.316729   60225 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:42.317611   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:51:42.368371   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:51:42.396161   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:51:42.423661   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:51:42.461478   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 11:51:42.492145   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:51:42.523047   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:51:42.551774   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 11:51:42.576922   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:51:42.600869   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:51:42.624223   60225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:51:42.647454   60225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:51:42.664055   60225 ssh_runner.go:195] Run: openssl version
	I0722 11:51:42.670102   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:51:42.681220   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.685927   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.685979   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:51:42.691823   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:51:42.702680   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:51:42.713592   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.719980   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.720042   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:51:42.727573   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:51:42.741805   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:51:42.756511   60225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.761951   60225 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.762007   60225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:51:42.767540   60225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:51:42.777758   60225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:51:42.782242   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:51:42.787989   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:51:42.793552   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:51:42.799083   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:51:42.804666   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:51:42.810222   60225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 11:51:42.818545   60225 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-605740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-605740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:51:42.818639   60225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:51:42.818689   60225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:42.869630   60225 cri.go:89] found id: ""
	I0722 11:51:42.869706   60225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:51:42.881642   60225 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:51:42.881666   60225 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:51:42.881716   60225 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:51:42.891566   60225 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:51:42.892605   60225 kubeconfig.go:125] found "default-k8s-diff-port-605740" server: "https://192.168.39.87:8444"
	I0722 11:51:42.894819   60225 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:51:42.906152   60225 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.87
	I0722 11:51:42.906184   60225 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:51:42.906197   60225 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:51:42.906244   60225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:51:42.943687   60225 cri.go:89] found id: ""
	I0722 11:51:42.943765   60225 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:51:42.962989   60225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:51:42.974334   60225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:51:42.974351   60225 kubeadm.go:157] found existing configuration files:
	
	I0722 11:51:42.974398   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 11:51:42.985009   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:51:42.985069   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:51:42.996084   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 11:51:43.006592   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:51:43.006643   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:51:43.017500   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 11:51:43.026779   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:51:43.026853   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:51:43.037913   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 11:51:43.048504   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:51:43.048548   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:51:43.058045   60225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:51:43.067626   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:43.195638   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.027881   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.237863   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.306672   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:44.409525   60225 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:51:44.409655   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:44.909710   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.409772   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:45.465579   60225 api_server.go:72] duration metric: took 1.056052731s to wait for apiserver process to appear ...
	I0722 11:51:45.465613   60225 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:51:45.465634   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:45.466164   60225 api_server.go:269] stopped: https://192.168.39.87:8444/healthz: Get "https://192.168.39.87:8444/healthz": dial tcp 192.168.39.87:8444: connect: connection refused
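The lines above (and the 403 responses a little further down) show the harness polling https://192.168.39.87:8444/healthz and treating both "connection refused" and the anonymous-user 403 as "not ready yet". A minimal sketch of that polling loop follows, assuming the endpoint from the log and skipping TLS verification of the self-signed apiserver certificate; it is illustrative only, not the harness's actual api_server.go logic.

// healthz_wait.go — illustrative sketch only; mirrors the /healthz polling in the
// log: connection errors and non-200 responses (e.g. 403 for system:anonymous)
// are treated as "not ready yet" and retried until the apiserver answers 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// endpoint taken from the log; the self-signed apiserver cert is skipped here
	url := "https://192.168.39.87:8444/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("not ready:", err) // e.g. connect: connection refused
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthz ok")
				return
			}
			fmt.Println("not ready, status:", resp.StatusCode) // e.g. 403 for system:anonymous
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}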
	I0722 11:51:45.349550   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:47.847373   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:45.717299   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.216545   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.717273   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:47.217030   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:47.716837   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:48.216368   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:48.716993   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:49.216273   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:49.717087   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:50.216313   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:46.520086   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:46.520553   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:46.520583   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:46.520514   61023 retry.go:31] will retry after 2.21537525s: waiting for machine to come up
	I0722 11:51:48.737964   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:48.738435   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:48.738478   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:48.738387   61023 retry.go:31] will retry after 3.351574636s: waiting for machine to come up
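(Editor's note: the `retry.go:31] will retry after ...` lines above are libmachine waiting for the VM to acquire an IP, with the delay growing on each attempt. The helper below is a hedged sketch of that pattern, not minikube's retry package; the attempt count and base delay are illustrative.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or attempts run out, sleeping a jittered,
// growing delay between tries, similar in spirit to the log messages above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	_ = retry(5, time.Second, func() error {
		calls++
		if calls < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
}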
	I0722 11:51:45.966026   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:48.955885   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:48.955919   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:48.955938   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.001144   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:49.001176   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:49.001190   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.011522   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:51:49.011567   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:51:49.466002   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.470318   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:49.470339   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:49.965932   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:49.974634   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:49.974659   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:50.466354   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:50.471348   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:50.471375   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:50.966014   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:50.970321   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:50.970344   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:51.466452   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:51.470676   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:51.470703   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:51.966303   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:51.970628   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:51:51.970654   60225 api_server.go:103] status: https://192.168.39.87:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:51:52.466173   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:51:52.473153   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 200:
	ok
	I0722 11:51:52.479257   60225 api_server.go:141] control plane version: v1.30.3
	I0722 11:51:52.479280   60225 api_server.go:131] duration metric: took 7.013661456s to wait for apiserver health ...
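(Editor's note: the progression above — connection refused, then 403 for the anonymous user, then 500 while post-start hooks such as rbac/bootstrap-roles and apiservice-discovery-controller finish, then 200 — is the usual startup sequence of a restarted kube-apiserver; the 403 most likely appears because the RBAC bootstrap roles that let anonymous requests read /healthz had not been created yet. Below is a minimal sketch of polling the endpoint the same way; the hard-coded URL comes from the log, and InsecureSkipVerify is a brevity assumption, not how minikube actually talks to healthz.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.87:8444/healthz" // address from the log above
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused while the static pod restarts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // the "ok" response seen at the end of the wait
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}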
	I0722 11:51:52.479289   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:51:52.479295   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:51:52.480886   60225 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:51:50.346624   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:52.847483   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:50.716844   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:51.216793   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:51.716262   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.216710   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.716538   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:53.216424   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:53.716256   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:54.216266   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:54.716357   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:55.217214   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:52.091480   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:52.091931   58921 main.go:141] libmachine: (no-preload-339929) DBG | unable to find current IP address of domain no-preload-339929 in network mk-no-preload-339929
	I0722 11:51:52.091958   58921 main.go:141] libmachine: (no-preload-339929) DBG | I0722 11:51:52.091893   61023 retry.go:31] will retry after 3.862235046s: waiting for machine to come up
	I0722 11:51:52.481952   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:51:52.493302   60225 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
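(Editor's note: the two commands above create /etc/cni/net.d and copy a 496-byte bridge conflist into it. The actual file contents are not shown in the log, so the constant below is only a hedged sketch of a typical bridge + portmap conflist of that kind; the subnet and option values are illustrative.)

package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Writing under /etc/cni requires root, which is why the log runs these steps via sudo.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}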
	I0722 11:51:52.517874   60225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:51:52.525926   60225 system_pods.go:59] 8 kube-system pods found
	I0722 11:51:52.525951   60225 system_pods.go:61] "coredns-7db6d8ff4d-dp56v" [5027da7d-5dc8-4ac5-ae15-ec99dffdce28] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:51:52.525960   60225 system_pods.go:61] "etcd-default-k8s-diff-port-605740" [648d4b21-2c2a-4ac7-a114-660379463d7d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:51:52.525967   60225 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-605740" [89ae1525-c944-4645-8951-e8834c9347b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:51:52.525978   60225 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-605740" [ff83ae5c-1dea-4633-afb8-c6487d1463b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:51:52.525983   60225 system_pods.go:61] "kube-proxy-ssttk" [6967a89c-ac7d-413f-bd0e-504367edca66] Running
	I0722 11:51:52.525991   60225 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-605740" [f930864f-4486-4c95-96f2-3004f58e80b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:51:52.526001   60225 system_pods.go:61] "metrics-server-569cc877fc-mzcvn" [9913463e-4ff9-4baa-a26e-76694605652e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:51:52.526009   60225 system_pods.go:61] "storage-provisioner" [08880428-a182-4540-a6f7-afffa3fc82a6] Running
	I0722 11:51:52.526020   60225 system_pods.go:74] duration metric: took 8.125407ms to wait for pod list to return data ...
	I0722 11:51:52.526030   60225 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:51:52.528765   60225 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:51:52.528788   60225 node_conditions.go:123] node cpu capacity is 2
	I0722 11:51:52.528801   60225 node_conditions.go:105] duration metric: took 2.765554ms to run NodePressure ...
	I0722 11:51:52.528822   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:51:52.797071   60225 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:51:52.802281   60225 kubeadm.go:739] kubelet initialised
	I0722 11:51:52.802311   60225 kubeadm.go:740] duration metric: took 5.210344ms waiting for restarted kubelet to initialise ...
	I0722 11:51:52.802322   60225 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:51:52.808512   60225 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.819816   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.819849   60225 pod_ready.go:81] duration metric: took 11.258701ms for pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.819861   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "coredns-7db6d8ff4d-dp56v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.819870   60225 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.825916   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.825958   60225 pod_ready.go:81] duration metric: took 6.076418ms for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.825977   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.825990   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:52.832243   60225 pod_ready.go:97] node "default-k8s-diff-port-605740" hosting pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.832272   60225 pod_ready.go:81] duration metric: took 6.26533ms for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	E0722 11:51:52.832286   60225 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-605740" hosting pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-605740" has status "Ready":"False"
	I0722 11:51:52.832295   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:51:54.841497   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
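(Editor's note: the pod_ready lines above first skip pods whose node is not yet Ready, then keep polling each control-plane pod until its Ready condition turns True. Below is a minimal client-go sketch of the same check; the kubeconfig path is an assumption, and the pod name is taken from the log.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
			"kube-controller-manager-default-k8s-diff-port-605740", metav1.GetOptions{})
		if err == nil && podReady(p) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // keep polling, as the log does every couple of seconds
	}
}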
	I0722 11:51:55.958678   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.959165   58921 main.go:141] libmachine: (no-preload-339929) Found IP for machine: 192.168.61.112
	I0722 11:51:55.959188   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has current primary IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.959195   58921 main.go:141] libmachine: (no-preload-339929) Reserving static IP address...
	I0722 11:51:55.959744   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "no-preload-339929", mac: "52:54:00:8d:72:69", ip: "192.168.61.112"} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:55.959774   58921 main.go:141] libmachine: (no-preload-339929) DBG | skip adding static IP to network mk-no-preload-339929 - found existing host DHCP lease matching {name: "no-preload-339929", mac: "52:54:00:8d:72:69", ip: "192.168.61.112"}
	I0722 11:51:55.959790   58921 main.go:141] libmachine: (no-preload-339929) Reserved static IP address: 192.168.61.112
	I0722 11:51:55.959806   58921 main.go:141] libmachine: (no-preload-339929) Waiting for SSH to be available...
	I0722 11:51:55.959817   58921 main.go:141] libmachine: (no-preload-339929) DBG | Getting to WaitForSSH function...
	I0722 11:51:55.962308   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.962703   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:55.962724   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:55.962853   58921 main.go:141] libmachine: (no-preload-339929) DBG | Using SSH client type: external
	I0722 11:51:55.962876   58921 main.go:141] libmachine: (no-preload-339929) DBG | Using SSH private key: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa (-rw-------)
	I0722 11:51:55.962924   58921 main.go:141] libmachine: (no-preload-339929) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 11:51:55.962946   58921 main.go:141] libmachine: (no-preload-339929) DBG | About to run SSH command:
	I0722 11:51:55.962963   58921 main.go:141] libmachine: (no-preload-339929) DBG | exit 0
	I0722 11:51:56.084629   58921 main.go:141] libmachine: (no-preload-339929) DBG | SSH cmd err, output: <nil>: 
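(Editor's note: "Getting to WaitForSSH function" above means libmachine shells out to the external /usr/bin/ssh client with the flags shown and waits for `exit 0` to succeed. Below is a hedged sketch of the same readiness probe written with golang.org/x/crypto/ssh instead of the external client; the key path, user and address are taken from the log.)

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}
	for {
		client, err := ssh.Dial("tcp", "192.168.61.112:22", cfg)
		if err != nil {
			fmt.Println("SSH not ready yet:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		sess, err := client.NewSession()
		if err == nil && sess.Run("exit 0") == nil {
			fmt.Println("SSH is available")
			client.Close()
			return
		}
		client.Close()
		time.Sleep(2 * time.Second)
	}
}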
	I0722 11:51:56.085007   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetConfigRaw
	I0722 11:51:56.085616   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:56.088120   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.088546   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.088576   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.088842   58921 profile.go:143] Saving config to /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/config.json ...
	I0722 11:51:56.089066   58921 machine.go:94] provisionDockerMachine start ...
	I0722 11:51:56.089088   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:56.089276   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.091216   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.091486   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.091508   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.091653   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.091823   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.091982   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.092132   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.092262   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.092434   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.092444   58921 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 11:51:56.192862   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 11:51:56.192891   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.193179   58921 buildroot.go:166] provisioning hostname "no-preload-339929"
	I0722 11:51:56.193207   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.193465   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.196195   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.196607   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.196637   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.196843   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.197048   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.197213   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.197358   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.197509   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.197707   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.197722   58921 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-339929 && echo "no-preload-339929" | sudo tee /etc/hostname
	I0722 11:51:56.309997   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-339929
	
	I0722 11:51:56.310019   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.312923   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.313263   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.313290   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.313481   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.313682   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.313882   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.314043   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.314223   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.314413   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.314435   58921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-339929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-339929/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-339929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 11:51:56.430088   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 11:51:56.430113   58921 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19313-5960/.minikube CaCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19313-5960/.minikube}
	I0722 11:51:56.430136   58921 buildroot.go:174] setting up certificates
	I0722 11:51:56.430147   58921 provision.go:84] configureAuth start
	I0722 11:51:56.430158   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetMachineName
	I0722 11:51:56.430428   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:56.433041   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.433421   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.433449   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.433619   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.436002   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.436300   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.436333   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.436508   58921 provision.go:143] copyHostCerts
	I0722 11:51:56.436579   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem, removing ...
	I0722 11:51:56.436595   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem
	I0722 11:51:56.436665   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/ca.pem (1082 bytes)
	I0722 11:51:56.436828   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem, removing ...
	I0722 11:51:56.436843   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem
	I0722 11:51:56.436876   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/cert.pem (1123 bytes)
	I0722 11:51:56.436950   58921 exec_runner.go:144] found /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem, removing ...
	I0722 11:51:56.436961   58921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem
	I0722 11:51:56.436987   58921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19313-5960/.minikube/key.pem (1679 bytes)
	I0722 11:51:56.437053   58921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem org=jenkins.no-preload-339929 san=[127.0.0.1 192.168.61.112 localhost minikube no-preload-339929]
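(Editor's note: the line above generates a server certificate for the machine with the SANs listed — 127.0.0.1, the machine IP, localhost, minikube and the hostname. Below is a minimal standard-library sketch of issuing a cert with that SAN set; it self-signs for brevity, whereas the real server.pem is signed by the ca.pem/ca-key.pem pair referenced in the log.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-339929"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list matches the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.112")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-339929"},
	}
	// Self-signed here to keep the sketch short; substitute the CA cert and key
	// as parent/signing key to reproduce the real provisioning flow.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
	if err != nil {
		panic(err)
	}
	out, _ := os.Create("server.pem")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}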
	I0722 11:51:56.792128   58921 provision.go:177] copyRemoteCerts
	I0722 11:51:56.792205   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 11:51:56.792238   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.794952   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.795254   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.795283   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.795439   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.795636   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.795772   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.795944   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:56.874574   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 11:51:56.898653   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 11:51:56.923200   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 11:51:56.946393   58921 provision.go:87] duration metric: took 516.233368ms to configureAuth
	I0722 11:51:56.946416   58921 buildroot.go:189] setting minikube options for container-runtime
	I0722 11:51:56.946612   58921 config.go:182] Loaded profile config "no-preload-339929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 11:51:56.946702   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:56.949412   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.949923   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:56.949955   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:56.949982   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:56.950195   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.950330   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:56.950479   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:56.950591   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:56.950844   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:56.950865   58921 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 11:51:57.225885   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 11:51:57.225909   58921 machine.go:97] duration metric: took 1.136828183s to provisionDockerMachine
	I0722 11:51:57.225924   58921 start.go:293] postStartSetup for "no-preload-339929" (driver="kvm2")
	I0722 11:51:57.225941   58921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 11:51:57.225967   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.226315   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 11:51:57.226346   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.229404   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.229787   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.229816   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.230008   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.230210   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.230382   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.230518   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.317585   58921 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 11:51:57.323102   58921 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 11:51:57.323133   58921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/addons for local assets ...
	I0722 11:51:57.323218   58921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19313-5960/.minikube/files for local assets ...
	I0722 11:51:57.323319   58921 filesync.go:149] local asset: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem -> 130982.pem in /etc/ssl/certs
	I0722 11:51:57.323446   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 11:51:57.336656   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:51:57.365241   58921 start.go:296] duration metric: took 139.301981ms for postStartSetup
	I0722 11:51:57.365299   58921 fix.go:56] duration metric: took 20.888227284s for fixHost
	I0722 11:51:57.365322   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.368451   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.368792   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.368825   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.368964   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.369191   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.369362   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.369532   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.369698   58921 main.go:141] libmachine: Using SSH client type: native
	I0722 11:51:57.369918   58921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I0722 11:51:57.369929   58921 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 11:51:57.478389   58921 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649117.454433204
	
	I0722 11:51:57.478414   58921 fix.go:216] guest clock: 1721649117.454433204
	I0722 11:51:57.478425   58921 fix.go:229] Guest: 2024-07-22 11:51:57.454433204 +0000 UTC Remote: 2024-07-22 11:51:57.365303623 +0000 UTC m=+356.953957779 (delta=89.129581ms)
	I0722 11:51:57.478469   58921 fix.go:200] guest clock delta is within tolerance: 89.129581ms
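The two fix.go lines above compare the guest's clock (read via `date +%s.%N` over SSH) against the host's and only adjust the guest when the delta exceeds a tolerance. A minimal sketch of that comparison, assuming the guest output shown above and an illustrative 2-second tolerance (the real tolerance lives in fix.go):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Example output from running `date +%s.%N` on the guest over SSH.
	guestOut := "1721649117.454433204"

	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	// Compare against the host's view of "now"; the tolerance here is a placeholder.
	delta := time.Since(guest)
	tolerance := 2 * time.Second
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v; would adjust guest clock\n", delta, tolerance)
	}
}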
	I0722 11:51:57.478488   58921 start.go:83] releasing machines lock for "no-preload-339929", held for 21.001447333s
	I0722 11:51:57.478515   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.478798   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:57.481848   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.482283   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.482313   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.482464   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483024   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483211   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:51:57.483286   58921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 11:51:57.483339   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.483594   58921 ssh_runner.go:195] Run: cat /version.json
	I0722 11:51:57.483620   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:51:57.486149   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486402   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486561   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.486591   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486746   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.486791   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:57.486808   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:57.486969   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.487059   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:51:57.487141   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.487289   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.487306   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:51:57.487460   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:51:57.487645   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:51:57.591994   58921 ssh_runner.go:195] Run: systemctl --version
	I0722 11:51:57.598617   58921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 11:51:57.754364   58921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 11:51:57.761045   58921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 11:51:57.761104   58921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 11:51:57.778215   58921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
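The find/mv above neutralizes any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix so they cannot conflict with the CNI minikube installs. A rough Go equivalent of that selection-and-rename, assuming root access and the same directory; intended for a scratch directory, not a live node:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	netd := "/etc/cni/net.d" // on a real node this needs root

	entries, err := os.ReadDir(netd)
	if err != nil {
		fmt.Println("skipping:", err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Same selection as `find ... -name *bridge* -or -name *podman*` in the log.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(netd, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
			} else {
				fmt.Println("disabled", src)
			}
		}
	}
}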
	I0722 11:51:57.778244   58921 start.go:495] detecting cgroup driver to use...
	I0722 11:51:57.778315   58921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 11:51:57.794964   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 11:51:57.811232   58921 docker.go:217] disabling cri-docker service (if available) ...
	I0722 11:51:57.811292   58921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 11:51:57.826950   58921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 11:51:57.842302   58921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 11:51:57.971792   58921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 11:51:58.129047   58921 docker.go:233] disabling docker service ...
	I0722 11:51:58.129104   58921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 11:51:58.146348   58921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 11:51:58.160958   58921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 11:51:58.294011   58921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 11:51:58.414996   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 11:51:58.430045   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 11:51:58.456092   58921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0722 11:51:58.456186   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.471939   58921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 11:51:58.472003   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.485092   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.497749   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.510721   58921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 11:51:58.522286   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.535122   58921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 11:51:58.555717   58921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
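The sed commands above patch /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, pin conmon_cgroup to "pod", and open unprivileged ports via default_sysctls. A small Go sketch of the same substitutions applied to an example config string; the values come from the log, the config text itself is illustrative:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Mirror the sed expressions from the log: force the pause image and the cgroupfs driver.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it as "pod" right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}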
	I0722 11:51:58.567386   58921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 11:51:58.577638   58921 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 11:51:58.577717   58921 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 11:51:58.592354   58921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
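When the bridge-netfilter sysctl is missing (the status-255 failure above), minikube falls back to loading br_netfilter and then enables IPv4 forwarding before restarting CRI-O. A short sketch of that check-and-enable sequence, assuming root and writing /proc directly; illustration only:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Equivalent of `sysctl net.bridge.bridge-nf-call-iptables` failing because the module is absent.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge-nf sysctl missing, trying modprobe br_netfilter:", err)
		if out, mErr := exec.Command("modprobe", "br_netfilter").CombinedOutput(); mErr != nil {
			fmt.Printf("modprobe failed: %v\n%s", mErr, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` (needs root).
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}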
	I0722 11:51:58.602448   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:51:58.729652   58921 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 11:51:58.881699   58921 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 11:51:58.881761   58921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 11:51:58.887049   58921 start.go:563] Will wait 60s for crictl version
	I0722 11:51:58.887099   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:58.890867   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 11:51:58.933081   58921 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 11:51:58.933171   58921 ssh_runner.go:195] Run: crio --version
	I0722 11:51:58.960418   58921 ssh_runner.go:195] Run: crio --version
	I0722 11:51:58.992787   58921 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0722 11:51:54.847605   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:57.346927   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:55.716788   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:56.216920   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:56.716328   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:57.217140   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:57.717149   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.217011   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:58.716511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:59.216969   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:51:59.717145   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:00.216454   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
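The repeated pgrep lines from PID 59674 are a wait loop: roughly every 500ms it checks whether a kube-apiserver process for this profile exists yet. A compact Go sketch of such a poll loop, using the pgrep pattern from the log and an illustrative timeout:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pattern := "kube-apiserver.*minikube.*"
	deadline := time.Now().Add(4 * time.Minute) // illustrative timeout

	for time.Now().Before(deadline) {
		// Same check as the log: pgrep exits 0 once a matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}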
	I0722 11:51:58.994009   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetIP
	I0722 11:51:58.996823   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:58.997258   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:51:58.997279   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:51:58.997465   58921 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0722 11:51:59.001724   58921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
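The bash one-liner above rewrites /etc/hosts on the guest so host.minikube.internal resolves to the gateway while every other entry is preserved. The same idea in Go, reading the file, dropping any stale entry and appending the new one; a sketch only, and unlike the log it does not write the result back:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts" // the log edits the guest's copy over SSH
	entry := "192.168.61.1\thost.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any stale host.minikube.internal line, keep everything else.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	fmt.Print(out) // a real run would write this back atomically, as the `cp /tmp/h.$$` step does
}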
	I0722 11:51:59.014700   58921 kubeadm.go:883] updating cluster {Name:no-preload-339929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 11:51:59.014819   58921 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 11:51:59.014847   58921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 11:51:59.049135   58921 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0722 11:51:59.049167   58921 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 11:51:59.049252   58921 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.049268   58921 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.049310   58921 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.049314   58921 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.049335   58921 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.049249   58921 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.049445   58921 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.049480   58921 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0722 11:51:59.050964   58921 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.050974   58921 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.050994   58921 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.051032   58921 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0722 11:51:59.051056   58921 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.051075   58921 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.051098   58921 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.051039   58921 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.220737   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.233831   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.239620   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.240125   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.240548   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.269898   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0722 11:51:59.293368   58921 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0722 11:51:59.293420   58921 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.293468   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.309956   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.336323   58921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.359236   58921 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0722 11:51:59.359284   58921 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.359336   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.359236   58921 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0722 11:51:59.359371   58921 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.359400   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.371412   58921 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0722 11:51:59.371449   58921 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.371485   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.404322   58921 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0722 11:51:59.404364   58921 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.404427   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.542134   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 11:51:59.542279   58921 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0722 11:51:59.542331   58921 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.542347   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0722 11:51:59.542360   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.542383   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 11:51:59.542439   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 11:51:59.542444   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0722 11:51:59.542691   58921 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0722 11:51:59.542725   58921 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.542757   58921 ssh_runner.go:195] Run: which crictl
	I0722 11:51:59.653771   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.653819   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:51:59.653859   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0722 11:51:59.653877   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.653935   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0722 11:51:59.653945   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:51:59.653994   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:51:59.654000   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0722 11:51:59.654034   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0722 11:51:59.654078   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:51:59.654091   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:51:59.654101   58921 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 11:51:59.706185   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706207   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.706218   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 11:51:59.706250   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 11:51:59.706256   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706292   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:51:59.706298   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0722 11:51:59.706369   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0722 11:51:59.706464   58921 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0722 11:51:59.706509   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0722 11:51:59.706554   58921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:51:57.342604   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:59.839045   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:51:59.846551   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:02.346391   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.347558   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:00.717154   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:01.216534   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:01.716349   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.217140   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.716458   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:03.216539   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:03.717179   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:04.216994   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:04.716264   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:05.216962   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:02.170882   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.464606279s)
	I0722 11:52:02.170914   58921 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.464582845s)
	I0722 11:52:02.170942   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0722 11:52:02.170923   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0722 11:52:02.170949   58921 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.464369058s)
	I0722 11:52:02.170970   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:52:02.170972   58921 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0722 11:52:02.171024   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 11:52:04.139100   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.9680515s)
	I0722 11:52:04.139132   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0722 11:52:04.139166   58921 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:52:04.139250   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0722 11:52:01.840270   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.339017   60225 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:04.840071   60225 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.840097   60225 pod_ready.go:81] duration metric: took 12.007790604s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.840110   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ssttk" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.845312   60225 pod_ready.go:92] pod "kube-proxy-ssttk" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.845336   60225 pod_ready.go:81] duration metric: took 5.218113ms for pod "kube-proxy-ssttk" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.845348   60225 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.850239   60225 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:04.850264   60225 pod_ready.go:81] duration metric: took 4.905551ms for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:04.850273   60225 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" ...
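Each pod_ready.go wait above ("waiting up to 4m0s for pod ... to be Ready") amounts to polling the pod's Ready condition until it turns True or the timeout expires. A rough equivalent using kubectl and jsonpath rather than minikube's own client; the context, namespace, and pod names are taken from the log and assumed to exist, so this is purely an illustration of the wait:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const (
		context   = "default-k8s-diff-port-605740" // assumed to match the profile name in the log
		namespace = "kube-system"
		pod       = "metrics-server-569cc877fc-mzcvn"
	)
	deadline := time.Now().Add(4 * time.Minute)

	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
			"get", "pod", pod, "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out: pod never reported Ready")
}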
	I0722 11:52:06.849408   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:09.347362   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:05.716753   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:06.216886   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:06.717064   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.217069   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.716953   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:08.216521   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:08.716334   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:09.216504   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:09.716904   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:10.216483   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:07.435274   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.29599961s)
	I0722 11:52:07.435305   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0722 11:52:07.435331   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:52:07.435368   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 11:52:08.882569   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.447179999s)
	I0722 11:52:08.882593   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0722 11:52:08.882621   58921 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:52:08.882670   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0722 11:52:06.857393   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:09.357742   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:11.845980   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:13.846559   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:10.717066   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:11.216328   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:11.717249   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:12.216579   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:12.716697   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:13.217042   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:13.717186   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:14.216301   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:14.716510   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:15.216925   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:10.861616   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.978918937s)
	I0722 11:52:10.861646   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0722 11:52:10.861670   58921 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:52:10.861717   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0722 11:52:11.517096   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 11:52:11.517126   58921 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:52:11.517179   58921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 11:52:13.588498   58921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.071290819s)
	I0722 11:52:13.588531   58921 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19313-5960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0722 11:52:13.588567   58921 cache_images.go:123] Successfully loaded all cached images
	I0722 11:52:13.588580   58921 cache_images.go:92] duration metric: took 14.539397599s to LoadCachedImages
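The sequence that just finished is the no-preload image path: each required image is inspected in the container runtime, its cached tarball is copied over only when missing or stale, and the tarball is then loaded with podman. A condensed Go sketch of that loop; the image names, tarball paths, and podman invocations come from the log, but the commands run locally here instead of through ssh_runner, so treat it as an illustration:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	images := map[string]string{
		"registry.k8s.io/etcd:3.5.14-0":           "/var/lib/minikube/images/etcd_3.5.14-0",
		"registry.k8s.io/coredns/coredns:v1.11.1": "/var/lib/minikube/images/coredns_v1.11.1",
	}
	for ref, tarball := range images {
		// Step 1: is the image already present in the container runtime?
		if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run(); err == nil {
			fmt.Println("already loaded:", ref)
			continue
		}
		// Step 2: make sure the tarball exists (the log skips the copy when `stat` matches).
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("missing cache tarball for", ref, "- would copy from .minikube/cache/images")
			continue
		}
		// Step 3: load the cached tarball into the runtime.
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		fmt.Printf("podman load %s: err=%v\n%s", filepath.Base(tarball), err, out)
	}
}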
	I0722 11:52:13.588591   58921 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.31.0-beta.0 crio true true} ...
	I0722 11:52:13.588728   58921 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-339929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 11:52:13.588806   58921 ssh_runner.go:195] Run: crio config
	I0722 11:52:13.641949   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:52:13.641969   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:52:13.641978   58921 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 11:52:13.641997   58921 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-339929 NodeName:no-preload-339929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 11:52:13.642187   58921 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-339929"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 11:52:13.642258   58921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 11:52:13.653174   58921 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 11:52:13.653244   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 11:52:13.662655   58921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0722 11:52:13.678906   58921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 11:52:13.699269   58921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
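The three "scp memory" writes above install the kubelet systemd drop-in, the kubelet unit, and the kubeadm.yaml that was printed earlier, followed by daemon-reload and a kubelet start. A short sketch of how the drop-in's ExecStart line can be assembled from the node values seen in this log; the templating here is illustrative, not minikube's own template code:

package main

import "fmt"

func main() {
	// Values as they appear in the log for this node (illustrative constants).
	const (
		k8sVersion = "v1.31.0-beta.0"
		nodeName   = "no-preload-339929"
		nodeIP     = "192.168.61.112"
	)

	dropIn := fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, k8sVersion, nodeName, nodeIP)

	// In the log this content is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf,
	// then `systemctl daemon-reload` and `systemctl start kubelet` are run.
	fmt.Print(dropIn)
}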
	I0722 11:52:13.718873   58921 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I0722 11:52:13.722962   58921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 11:52:13.736241   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:52:13.858093   58921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:52:13.875377   58921 certs.go:68] Setting up /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929 for IP: 192.168.61.112
	I0722 11:52:13.875402   58921 certs.go:194] generating shared ca certs ...
	I0722 11:52:13.875421   58921 certs.go:226] acquiring lock for ca certs: {Name:mkd084ec5fec65793ddf74f7d182bb8e425e2073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:52:13.875588   58921 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key
	I0722 11:52:13.875664   58921 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key
	I0722 11:52:13.875677   58921 certs.go:256] generating profile certs ...
	I0722 11:52:13.875785   58921 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.key
	I0722 11:52:13.875857   58921 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.key.26403d20
	I0722 11:52:13.875895   58921 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.key
	I0722 11:52:13.875998   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem (1338 bytes)
	W0722 11:52:13.876025   58921 certs.go:480] ignoring /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098_empty.pem, impossibly tiny 0 bytes
	I0722 11:52:13.876036   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 11:52:13.876057   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/ca.pem (1082 bytes)
	I0722 11:52:13.876079   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/cert.pem (1123 bytes)
	I0722 11:52:13.876100   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/certs/key.pem (1679 bytes)
	I0722 11:52:13.876139   58921 certs.go:484] found cert: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem (1708 bytes)
	I0722 11:52:13.876804   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 11:52:13.923607   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0722 11:52:13.952785   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 11:52:13.983113   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 11:52:14.012712   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 11:52:14.047958   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 11:52:14.077411   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 11:52:14.100978   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 11:52:14.123416   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/ssl/certs/130982.pem --> /usr/share/ca-certificates/130982.pem (1708 bytes)
	I0722 11:52:14.145662   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 11:52:14.169188   58921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19313-5960/.minikube/certs/13098.pem --> /usr/share/ca-certificates/13098.pem (1338 bytes)
	I0722 11:52:14.194650   58921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 11:52:14.212538   58921 ssh_runner.go:195] Run: openssl version
	I0722 11:52:14.218725   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130982.pem && ln -fs /usr/share/ca-certificates/130982.pem /etc/ssl/certs/130982.pem"
	I0722 11:52:14.231079   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.235652   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:41 /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.235695   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130982.pem
	I0722 11:52:14.241643   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130982.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 11:52:14.252681   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 11:52:14.263166   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.267588   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.267629   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 11:52:14.273182   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 11:52:14.284087   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13098.pem && ln -fs /usr/share/ca-certificates/13098.pem /etc/ssl/certs/13098.pem"
	I0722 11:52:14.294571   58921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.298824   58921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:41 /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.298870   58921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13098.pem
	I0722 11:52:14.304464   58921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13098.pem /etc/ssl/certs/51391683.0"
	I0722 11:52:14.315110   58921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 11:52:14.319444   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 11:52:14.325221   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 11:52:14.330923   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 11:52:14.336509   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 11:52:14.342749   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 11:52:14.348854   58921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
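Each `openssl x509 -checkend 86400` above asks whether a certificate expires within the next 24 hours. The same check in Go with crypto/x509, pointed at one of the cert paths from the log (reading that path requires access to the guest's filesystem, so this is a local illustration of the check):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	// Equivalent of `-checkend 86400`: does the cert survive another 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h, until", cert.NotAfter)
	}
}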
	I0722 11:52:14.355682   58921 kubeadm.go:392] StartCluster: {Name:no-preload-339929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-339929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 11:52:14.355818   58921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 11:52:14.355867   58921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:52:14.395279   58921 cri.go:89] found id: ""
	I0722 11:52:14.395351   58921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 11:52:14.406738   58921 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 11:52:14.406755   58921 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 11:52:14.406793   58921 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 11:52:14.417161   58921 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:52:14.418468   58921 kubeconfig.go:125] found "no-preload-339929" server: "https://192.168.61.112:8443"
	I0722 11:52:14.420764   58921 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 11:52:14.430722   58921 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.112
	I0722 11:52:14.430749   58921 kubeadm.go:1160] stopping kube-system containers ...
	I0722 11:52:14.430760   58921 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 11:52:14.430809   58921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 11:52:14.472164   58921 cri.go:89] found id: ""
	I0722 11:52:14.472228   58921 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 11:52:14.489758   58921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:52:14.499830   58921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:52:14.499878   58921 kubeadm.go:157] found existing configuration files:
	
	I0722 11:52:14.499932   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:52:14.508977   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:52:14.509024   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:52:14.518199   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:52:14.527136   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:52:14.527182   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:52:14.536182   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:52:14.545425   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:52:14.545482   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:52:14.554843   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:52:14.563681   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:52:14.563722   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:52:14.572855   58921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:52:14.582257   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:14.691452   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.383530   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:11.857298   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:14.357114   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:16.347252   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:18.846603   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:15.716962   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.216373   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.716871   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:17.217108   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:17.716670   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:18.216503   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:18.717214   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:19.216481   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:19.716922   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:20.216618   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:15.600861   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.661719   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:15.756150   58921 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:52:15.756243   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.256571   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.756636   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:16.788487   58921 api_server.go:72] duration metric: took 1.032338614s to wait for apiserver process to appear ...
	I0722 11:52:16.788511   58921 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:52:16.788538   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:16.789057   58921 api_server.go:269] stopped: https://192.168.61.112:8443/healthz: Get "https://192.168.61.112:8443/healthz": dial tcp 192.168.61.112:8443: connect: connection refused
	I0722 11:52:17.289531   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.643492   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:52:19.643522   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:52:19.643538   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.712047   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 11:52:19.712087   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 11:52:19.789319   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:19.903924   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:19.903964   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:20.289484   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:20.294499   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:20.294532   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:16.357488   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:18.857066   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:20.789245   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:20.795813   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 11:52:20.795846   58921 api_server.go:103] status: https://192.168.61.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 11:52:21.289564   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:52:21.294121   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 200:
	ok
	I0722 11:52:21.300616   58921 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 11:52:21.300644   58921 api_server.go:131] duration metric: took 4.512126962s to wait for apiserver health ...
	I0722 11:52:21.300652   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:52:21.300661   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:52:21.302460   58921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:52:21.347296   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:23.848716   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:20.717047   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.216924   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.716824   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:22.216907   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:22.716538   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:23.216351   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:23.716755   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:24.216816   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:24.717065   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:25.216949   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:21.303690   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:52:21.315042   58921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:52:21.336417   58921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:52:21.347183   58921 system_pods.go:59] 8 kube-system pods found
	I0722 11:52:21.347225   58921 system_pods.go:61] "coredns-5cfdc65f69-v5qdv" [2321209d-652c-45c1-8d0a-b4ad58f60a25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 11:52:21.347238   58921 system_pods.go:61] "etcd-no-preload-339929" [9dbeed49-0d34-4643-8a7c-28b9b8b60b00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 11:52:21.347248   58921 system_pods.go:61] "kube-apiserver-no-preload-339929" [f9675e86-589e-4c6c-b4b5-627e2192b028] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 11:52:21.347259   58921 system_pods.go:61] "kube-controller-manager-no-preload-339929" [5033e74b-5a1c-4044-aadf-67d5e44b17c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 11:52:21.347265   58921 system_pods.go:61] "kube-proxy-78tx8" [13f226f0-8837-44d2-aa74-a7db43c73651] Running
	I0722 11:52:21.347276   58921 system_pods.go:61] "kube-scheduler-no-preload-339929" [bf82937c-c95c-4961-afca-60dfe128b6bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 11:52:21.347288   58921 system_pods.go:61] "metrics-server-78fcd8795b-2lbrr" [1eab4084-3ddf-44f3-9761-130a6f137ea6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:52:21.347294   58921 system_pods.go:61] "storage-provisioner" [66323714-b119-4680-91a3-2e2142e523b4] Running
	I0722 11:52:21.347308   58921 system_pods.go:74] duration metric: took 10.869226ms to wait for pod list to return data ...
	I0722 11:52:21.347316   58921 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:52:21.351215   58921 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:52:21.351242   58921 node_conditions.go:123] node cpu capacity is 2
	I0722 11:52:21.351254   58921 node_conditions.go:105] duration metric: took 3.932625ms to run NodePressure ...
	I0722 11:52:21.351273   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 11:52:21.620524   58921 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 11:52:21.625517   58921 kubeadm.go:739] kubelet initialised
	I0722 11:52:21.625540   58921 kubeadm.go:740] duration metric: took 4.987123ms waiting for restarted kubelet to initialise ...
	I0722 11:52:21.625550   58921 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:52:21.630823   58921 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:23.639602   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.140079   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:25.140103   58921 pod_ready.go:81] duration metric: took 3.509258556s for pod "coredns-5cfdc65f69-v5qdv" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:25.140112   58921 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:20.860912   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:23.356763   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.357406   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:26.345970   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:28.347288   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:25.716863   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:26.217017   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:26.217108   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:26.259154   59674 cri.go:89] found id: ""
	I0722 11:52:26.259183   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.259193   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:26.259201   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:26.259260   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:26.292777   59674 cri.go:89] found id: ""
	I0722 11:52:26.292801   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.292807   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:26.292813   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:26.292858   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:26.327874   59674 cri.go:89] found id: ""
	I0722 11:52:26.327899   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.327907   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:26.327913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:26.327960   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:26.372370   59674 cri.go:89] found id: ""
	I0722 11:52:26.372405   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.372415   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:26.372421   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:26.372468   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:26.406270   59674 cri.go:89] found id: ""
	I0722 11:52:26.406294   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.406301   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:26.406306   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:26.406355   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:26.441204   59674 cri.go:89] found id: ""
	I0722 11:52:26.441230   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.441237   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:26.441242   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:26.441302   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:26.476132   59674 cri.go:89] found id: ""
	I0722 11:52:26.476162   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.476174   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:26.476180   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:26.476236   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:26.509534   59674 cri.go:89] found id: ""
	I0722 11:52:26.509565   59674 logs.go:276] 0 containers: []
	W0722 11:52:26.509576   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:26.509588   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:26.509601   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:26.564002   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:26.564030   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:26.578619   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:26.578650   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:26.706713   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:26.706738   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:26.706752   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:26.772168   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:26.772201   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:29.313944   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:29.328002   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:29.328076   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:29.367128   59674 cri.go:89] found id: ""
	I0722 11:52:29.367157   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.367166   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:29.367173   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:29.367244   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:29.401552   59674 cri.go:89] found id: ""
	I0722 11:52:29.401581   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.401592   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:29.401599   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:29.401677   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:29.433892   59674 cri.go:89] found id: ""
	I0722 11:52:29.433919   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.433931   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:29.433943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:29.433993   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:29.469619   59674 cri.go:89] found id: ""
	I0722 11:52:29.469649   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.469660   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:29.469667   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:29.469726   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:29.504771   59674 cri.go:89] found id: ""
	I0722 11:52:29.504795   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.504805   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:29.504811   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:29.504871   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:29.538861   59674 cri.go:89] found id: ""
	I0722 11:52:29.538890   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.538900   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:29.538912   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:29.538975   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:29.593633   59674 cri.go:89] found id: ""
	I0722 11:52:29.593669   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.593680   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:29.593688   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:29.593747   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:29.638605   59674 cri.go:89] found id: ""
	I0722 11:52:29.638636   59674 logs.go:276] 0 containers: []
	W0722 11:52:29.638645   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:29.638653   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:29.638664   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:29.691633   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:29.691662   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:29.707277   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:29.707305   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:29.785616   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:29.785638   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:29.785669   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:29.857487   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:29.857517   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:27.146649   58921 pod_ready.go:102] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:28.646058   58921 pod_ready.go:92] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:28.646083   58921 pod_ready.go:81] duration metric: took 3.505964852s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:28.646092   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:27.855581   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:29.856605   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:30.847291   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.847946   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.398141   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:32.411380   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:32.411453   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:32.445857   59674 cri.go:89] found id: ""
	I0722 11:52:32.445882   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.445889   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:32.445895   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:32.445946   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:32.478146   59674 cri.go:89] found id: ""
	I0722 11:52:32.478180   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.478190   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:32.478197   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:32.478268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:32.511110   59674 cri.go:89] found id: ""
	I0722 11:52:32.511138   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.511147   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:32.511161   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:32.511216   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:32.545388   59674 cri.go:89] found id: ""
	I0722 11:52:32.545415   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.545425   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:32.545432   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:32.545489   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:32.579097   59674 cri.go:89] found id: ""
	I0722 11:52:32.579125   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.579135   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:32.579141   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:32.579205   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:32.615302   59674 cri.go:89] found id: ""
	I0722 11:52:32.615333   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.615343   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:32.615350   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:32.615407   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:32.654527   59674 cri.go:89] found id: ""
	I0722 11:52:32.654552   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.654562   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:32.654568   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:32.654625   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:32.689409   59674 cri.go:89] found id: ""
	I0722 11:52:32.689437   59674 logs.go:276] 0 containers: []
	W0722 11:52:32.689445   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:32.689454   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:32.689470   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:32.740478   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:32.740511   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:32.754266   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:32.754299   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:32.824441   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:32.824461   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:32.824475   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:32.896752   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:32.896781   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:30.652706   58921 pod_ready.go:102] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:32.653310   58921 pod_ready.go:102] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.154169   58921 pod_ready.go:92] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.154195   58921 pod_ready.go:81] duration metric: took 6.508095973s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.154207   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.160406   58921 pod_ready.go:92] pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.160429   58921 pod_ready.go:81] duration metric: took 6.213375ms for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.160440   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-78tx8" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.166358   58921 pod_ready.go:92] pod "kube-proxy-78tx8" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.166377   58921 pod_ready.go:81] duration metric: took 5.930051ms for pod "kube-proxy-78tx8" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.166387   58921 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.170508   58921 pod_ready.go:92] pod "kube-scheduler-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:52:35.170528   58921 pod_ready.go:81] duration metric: took 4.133521ms for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:35.170538   58921 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" ...
	I0722 11:52:32.355967   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:34.358106   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.346579   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:37.346671   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:39.346974   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:35.438478   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:35.454105   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:35.454175   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:35.493287   59674 cri.go:89] found id: ""
	I0722 11:52:35.493319   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.493330   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:35.493337   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:35.493396   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:35.528035   59674 cri.go:89] found id: ""
	I0722 11:52:35.528060   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.528066   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:35.528072   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:35.528126   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:35.586153   59674 cri.go:89] found id: ""
	I0722 11:52:35.586199   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.586213   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:35.586220   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:35.586283   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:35.630371   59674 cri.go:89] found id: ""
	I0722 11:52:35.630405   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.630416   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:35.630425   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:35.630499   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:35.667593   59674 cri.go:89] found id: ""
	I0722 11:52:35.667621   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.667629   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:35.667635   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:35.667682   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:35.706933   59674 cri.go:89] found id: ""
	I0722 11:52:35.706964   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.706973   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:35.706981   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:35.707040   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:35.743174   59674 cri.go:89] found id: ""
	I0722 11:52:35.743205   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.743215   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:35.743223   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:35.743289   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:35.784450   59674 cri.go:89] found id: ""
	I0722 11:52:35.784478   59674 logs.go:276] 0 containers: []
	W0722 11:52:35.784487   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:35.784497   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:35.784508   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:35.840326   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:35.840357   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:35.856432   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:35.856471   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:35.932273   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:35.932298   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:35.932313   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:36.010376   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:36.010420   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:38.552982   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:38.566817   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:38.566895   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:38.601313   59674 cri.go:89] found id: ""
	I0722 11:52:38.601356   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.601371   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:38.601381   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:38.601459   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:38.637303   59674 cri.go:89] found id: ""
	I0722 11:52:38.637331   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.637341   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:38.637352   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:38.637413   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:38.672840   59674 cri.go:89] found id: ""
	I0722 11:52:38.672871   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.672883   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:38.672894   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:38.672986   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:38.709375   59674 cri.go:89] found id: ""
	I0722 11:52:38.709402   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.709413   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:38.709420   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:38.709473   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:38.744060   59674 cri.go:89] found id: ""
	I0722 11:52:38.744084   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.744094   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:38.744100   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:38.744161   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:38.778322   59674 cri.go:89] found id: ""
	I0722 11:52:38.778350   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.778361   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:38.778368   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:38.778427   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:38.811803   59674 cri.go:89] found id: ""
	I0722 11:52:38.811830   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.811840   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:38.811847   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:38.811902   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:38.843935   59674 cri.go:89] found id: ""
	I0722 11:52:38.843959   59674 logs.go:276] 0 containers: []
	W0722 11:52:38.843975   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:38.843985   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:38.843999   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:38.912613   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:38.912639   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:38.912654   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:39.001924   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:39.001964   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:39.041645   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:39.041684   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:39.093322   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:39.093354   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:37.177516   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:39.675985   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:36.856164   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:38.858983   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.847112   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:44.346271   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.606698   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:41.619758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:41.619815   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:41.657432   59674 cri.go:89] found id: ""
	I0722 11:52:41.657458   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.657469   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:41.657476   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:41.657536   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:41.695136   59674 cri.go:89] found id: ""
	I0722 11:52:41.695169   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.695177   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:41.695183   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:41.695243   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:41.735595   59674 cri.go:89] found id: ""
	I0722 11:52:41.735621   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.735641   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:41.735648   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:41.735710   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:41.770398   59674 cri.go:89] found id: ""
	I0722 11:52:41.770428   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.770438   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:41.770445   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:41.770554   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:41.808250   59674 cri.go:89] found id: ""
	I0722 11:52:41.808277   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.808285   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:41.808290   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:41.808349   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:41.843494   59674 cri.go:89] found id: ""
	I0722 11:52:41.843524   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.843536   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:41.843543   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:41.843611   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:41.882916   59674 cri.go:89] found id: ""
	I0722 11:52:41.882941   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.882949   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:41.882954   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:41.883011   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:41.916503   59674 cri.go:89] found id: ""
	I0722 11:52:41.916527   59674 logs.go:276] 0 containers: []
	W0722 11:52:41.916538   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:41.916549   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:41.916564   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:41.966989   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:41.967023   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:42.021676   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:42.021716   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:42.054625   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:42.054655   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:42.122425   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:42.122449   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:42.122463   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:44.699097   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:44.713759   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:44.713815   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:44.752668   59674 cri.go:89] found id: ""
	I0722 11:52:44.752698   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.752709   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:44.752716   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:44.752778   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:44.793550   59674 cri.go:89] found id: ""
	I0722 11:52:44.793575   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.793587   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:44.793594   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:44.793665   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:44.833860   59674 cri.go:89] found id: ""
	I0722 11:52:44.833882   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.833890   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:44.833903   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:44.833952   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:44.873847   59674 cri.go:89] found id: ""
	I0722 11:52:44.873880   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.873898   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:44.873910   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:44.873957   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:44.907843   59674 cri.go:89] found id: ""
	I0722 11:52:44.907867   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.907877   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:44.907884   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:44.907937   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:44.942998   59674 cri.go:89] found id: ""
	I0722 11:52:44.943026   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.943034   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:44.943040   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:44.943093   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:44.981145   59674 cri.go:89] found id: ""
	I0722 11:52:44.981173   59674 logs.go:276] 0 containers: []
	W0722 11:52:44.981183   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:44.981190   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:44.981252   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:45.018542   59674 cri.go:89] found id: ""
	I0722 11:52:45.018568   59674 logs.go:276] 0 containers: []
	W0722 11:52:45.018576   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:45.018585   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:45.018599   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:45.069480   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:45.069510   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:45.083323   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:45.083347   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:45.149976   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:45.149996   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:45.150008   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:45.230617   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:45.230649   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:41.677474   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:43.678565   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:41.357194   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:43.856753   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:46.346339   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.846643   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:47.770384   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:47.793582   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:47.793654   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:47.837187   59674 cri.go:89] found id: ""
	I0722 11:52:47.837215   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.837224   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:47.837232   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:47.837290   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:47.874295   59674 cri.go:89] found id: ""
	I0722 11:52:47.874325   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.874336   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:47.874345   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:47.874414   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:47.915782   59674 cri.go:89] found id: ""
	I0722 11:52:47.915812   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.915823   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:47.915830   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:47.915886   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:47.956624   59674 cri.go:89] found id: ""
	I0722 11:52:47.956653   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.956663   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:47.956670   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:47.956731   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:47.996237   59674 cri.go:89] found id: ""
	I0722 11:52:47.996264   59674 logs.go:276] 0 containers: []
	W0722 11:52:47.996272   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:47.996277   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:47.996335   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:48.032022   59674 cri.go:89] found id: ""
	I0722 11:52:48.032046   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.032058   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:48.032066   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:48.032117   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:48.066218   59674 cri.go:89] found id: ""
	I0722 11:52:48.066248   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.066259   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:48.066265   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:48.066316   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:48.099781   59674 cri.go:89] found id: ""
	I0722 11:52:48.099803   59674 logs.go:276] 0 containers: []
	W0722 11:52:48.099810   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:48.099818   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:48.099827   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:48.174488   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:48.174528   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:48.215029   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:48.215068   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:48.268819   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:48.268850   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:48.283307   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:48.283335   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:48.356491   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:45.678697   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.179684   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:45.857970   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:48.357330   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.357469   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.846976   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.847954   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:50.857172   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:50.871178   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:50.871244   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:50.907166   59674 cri.go:89] found id: ""
	I0722 11:52:50.907190   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.907197   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:50.907203   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:50.907256   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:50.942929   59674 cri.go:89] found id: ""
	I0722 11:52:50.942958   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.942969   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:50.942976   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:50.943041   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:50.982323   59674 cri.go:89] found id: ""
	I0722 11:52:50.982355   59674 logs.go:276] 0 containers: []
	W0722 11:52:50.982367   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:50.982373   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:50.982436   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:51.016557   59674 cri.go:89] found id: ""
	I0722 11:52:51.016586   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.016597   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:51.016604   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:51.016662   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:51.051811   59674 cri.go:89] found id: ""
	I0722 11:52:51.051844   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.051855   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:51.051863   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:51.051923   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:51.088147   59674 cri.go:89] found id: ""
	I0722 11:52:51.088177   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.088189   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:51.088197   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:51.088257   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:51.126795   59674 cri.go:89] found id: ""
	I0722 11:52:51.126827   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.126838   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:51.126845   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:51.126909   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:51.165508   59674 cri.go:89] found id: ""
	I0722 11:52:51.165539   59674 logs.go:276] 0 containers: []
	W0722 11:52:51.165550   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:51.165562   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:51.165575   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:51.245014   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:51.245040   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:51.245055   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:51.335845   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:51.335893   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:51.375806   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:51.375837   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:51.430241   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:51.430270   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:53.944572   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:53.957805   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:53.957899   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:53.997116   59674 cri.go:89] found id: ""
	I0722 11:52:53.997144   59674 logs.go:276] 0 containers: []
	W0722 11:52:53.997154   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:53.997161   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:53.997222   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:54.033518   59674 cri.go:89] found id: ""
	I0722 11:52:54.033544   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.033553   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:54.033560   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:54.033626   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:54.071083   59674 cri.go:89] found id: ""
	I0722 11:52:54.071108   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.071119   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:54.071127   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:54.071194   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:54.107834   59674 cri.go:89] found id: ""
	I0722 11:52:54.107860   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.107868   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:54.107873   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:54.107929   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:54.141825   59674 cri.go:89] found id: ""
	I0722 11:52:54.141850   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.141858   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:54.141865   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:54.141925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:54.174297   59674 cri.go:89] found id: ""
	I0722 11:52:54.174323   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.174333   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:54.174341   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:54.174403   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:54.206781   59674 cri.go:89] found id: ""
	I0722 11:52:54.206803   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.206811   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:54.206816   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:54.206861   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:54.239180   59674 cri.go:89] found id: ""
	I0722 11:52:54.239204   59674 logs.go:276] 0 containers: []
	W0722 11:52:54.239212   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:54.239223   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:54.239237   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:54.307317   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:54.307345   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:54.307360   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:54.392334   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:54.392368   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:52:54.435129   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:54.435168   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:54.495428   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:54.495456   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:50.676790   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.678046   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:55.177430   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:52.357839   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:54.856859   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:55.346866   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.845527   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.009559   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:52:57.024145   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:52:57.024215   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:52:57.063027   59674 cri.go:89] found id: ""
	I0722 11:52:57.063053   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.063060   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:52:57.063066   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:52:57.063133   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:52:57.095940   59674 cri.go:89] found id: ""
	I0722 11:52:57.095961   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.095968   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:52:57.095973   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:52:57.096018   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:52:57.129931   59674 cri.go:89] found id: ""
	I0722 11:52:57.129952   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.129960   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:52:57.129965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:52:57.130009   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:52:57.164643   59674 cri.go:89] found id: ""
	I0722 11:52:57.164672   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.164683   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:52:57.164691   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:52:57.164744   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:52:57.201411   59674 cri.go:89] found id: ""
	I0722 11:52:57.201440   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.201451   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:52:57.201458   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:52:57.201523   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:52:57.235816   59674 cri.go:89] found id: ""
	I0722 11:52:57.235838   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.235848   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:52:57.235854   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:52:57.235913   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:52:57.273896   59674 cri.go:89] found id: ""
	I0722 11:52:57.273925   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.273936   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:52:57.273943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:52:57.273997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:52:57.312577   59674 cri.go:89] found id: ""
	I0722 11:52:57.312602   59674 logs.go:276] 0 containers: []
	W0722 11:52:57.312610   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:52:57.312618   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:52:57.312636   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:57.366529   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:52:57.366558   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:52:57.380829   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:52:57.380854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:52:57.450855   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:52:57.450875   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:52:57.450889   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:52:57.531450   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:52:57.531480   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:00.071642   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:00.085199   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:00.085264   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:00.123418   59674 cri.go:89] found id: ""
	I0722 11:53:00.123439   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.123446   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:00.123451   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:00.123510   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:00.157005   59674 cri.go:89] found id: ""
	I0722 11:53:00.157032   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.157042   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:00.157049   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:00.157108   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:00.196244   59674 cri.go:89] found id: ""
	I0722 11:53:00.196272   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.196281   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:00.196286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:00.196335   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:00.233010   59674 cri.go:89] found id: ""
	I0722 11:53:00.233039   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.233049   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:00.233056   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:00.233112   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:00.268154   59674 cri.go:89] found id: ""
	I0722 11:53:00.268179   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.268187   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:00.268192   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:00.268250   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:00.304159   59674 cri.go:89] found id: ""
	I0722 11:53:00.304184   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.304194   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:00.304201   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:00.304268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:00.336853   59674 cri.go:89] found id: ""
	I0722 11:53:00.336883   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.336893   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:00.336899   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:00.336960   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:00.370921   59674 cri.go:89] found id: ""
	I0722 11:53:00.370943   59674 logs.go:276] 0 containers: []
	W0722 11:53:00.370953   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:00.370963   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:00.370979   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:52:57.177913   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:59.677194   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:57.356163   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:52:59.357042   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:00.347125   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:02.846531   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:00.422367   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:00.422399   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:00.437915   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:00.437947   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:00.512663   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:00.512689   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:00.512700   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:00.595147   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:00.595189   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:03.135150   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:03.148079   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:03.148151   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:03.182278   59674 cri.go:89] found id: ""
	I0722 11:53:03.182308   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.182318   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:03.182327   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:03.182409   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:03.220570   59674 cri.go:89] found id: ""
	I0722 11:53:03.220599   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.220607   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:03.220613   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:03.220671   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:03.255917   59674 cri.go:89] found id: ""
	I0722 11:53:03.255940   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.255950   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:03.255957   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:03.256020   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:03.290857   59674 cri.go:89] found id: ""
	I0722 11:53:03.290885   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.290895   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:03.290902   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:03.290959   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:03.326917   59674 cri.go:89] found id: ""
	I0722 11:53:03.326940   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.326951   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:03.326958   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:03.327016   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:03.363787   59674 cri.go:89] found id: ""
	I0722 11:53:03.363809   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.363818   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:03.363825   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:03.363881   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:03.397453   59674 cri.go:89] found id: ""
	I0722 11:53:03.397479   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.397489   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:03.397496   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:03.397554   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:03.429984   59674 cri.go:89] found id: ""
	I0722 11:53:03.430012   59674 logs.go:276] 0 containers: []
	W0722 11:53:03.430020   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:03.430037   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:03.430054   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:03.509273   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:03.509305   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:03.555522   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:03.555552   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:03.607361   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:03.607389   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:03.622731   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:03.622752   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:03.699844   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:02.176754   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:04.180602   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:01.856868   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:04.356343   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:05.346023   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:07.846190   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:06.200053   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:06.213571   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:06.213628   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:06.249320   59674 cri.go:89] found id: ""
	I0722 11:53:06.249348   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.249359   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:06.249366   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:06.249426   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:06.283378   59674 cri.go:89] found id: ""
	I0722 11:53:06.283405   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.283415   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:06.283422   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:06.283482   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:06.319519   59674 cri.go:89] found id: ""
	I0722 11:53:06.319540   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.319548   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:06.319553   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:06.319606   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:06.352263   59674 cri.go:89] found id: ""
	I0722 11:53:06.352289   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.352298   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:06.352310   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:06.352370   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:06.388262   59674 cri.go:89] found id: ""
	I0722 11:53:06.388285   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.388292   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:06.388297   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:06.388348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:06.427487   59674 cri.go:89] found id: ""
	I0722 11:53:06.427519   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.427529   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:06.427537   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:06.427592   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:06.462567   59674 cri.go:89] found id: ""
	I0722 11:53:06.462597   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.462610   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:06.462618   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:06.462674   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:06.496880   59674 cri.go:89] found id: ""
	I0722 11:53:06.496904   59674 logs.go:276] 0 containers: []
	W0722 11:53:06.496911   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:06.496920   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:06.496929   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:06.549225   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:06.549262   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:06.564780   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:06.564808   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:06.632152   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:06.632177   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:06.632196   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:06.706909   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:06.706948   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:09.246773   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:09.260605   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:09.260673   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:09.294685   59674 cri.go:89] found id: ""
	I0722 11:53:09.294707   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.294718   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:09.294726   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:09.294787   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:09.331109   59674 cri.go:89] found id: ""
	I0722 11:53:09.331140   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.331148   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:09.331153   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:09.331208   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:09.366873   59674 cri.go:89] found id: ""
	I0722 11:53:09.366901   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.366911   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:09.366928   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:09.366980   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:09.399614   59674 cri.go:89] found id: ""
	I0722 11:53:09.399642   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.399649   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:09.399655   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:09.399708   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:09.434326   59674 cri.go:89] found id: ""
	I0722 11:53:09.434359   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.434369   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:09.434375   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:09.434437   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:09.468911   59674 cri.go:89] found id: ""
	I0722 11:53:09.468942   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.468953   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:09.468961   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:09.469021   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:09.510003   59674 cri.go:89] found id: ""
	I0722 11:53:09.510031   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.510042   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:09.510048   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:09.510101   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:09.545074   59674 cri.go:89] found id: ""
	I0722 11:53:09.545103   59674 logs.go:276] 0 containers: []
	W0722 11:53:09.545113   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:09.545123   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:09.545148   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:09.559370   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:09.559399   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:09.632039   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:09.632064   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:09.632083   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:09.711851   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:09.711881   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:09.751872   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:09.751898   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:06.678310   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:09.176261   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:06.358444   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:08.858131   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:09.846552   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:12.347071   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:12.302294   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:12.315638   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:12.315708   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:12.349556   59674 cri.go:89] found id: ""
	I0722 11:53:12.349579   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.349588   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:12.349595   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:12.349651   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:12.387443   59674 cri.go:89] found id: ""
	I0722 11:53:12.387470   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.387483   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:12.387488   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:12.387541   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:12.422676   59674 cri.go:89] found id: ""
	I0722 11:53:12.422704   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.422714   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:12.422720   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:12.422781   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:12.457069   59674 cri.go:89] found id: ""
	I0722 11:53:12.457099   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.457111   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:12.457117   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:12.457175   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:12.492498   59674 cri.go:89] found id: ""
	I0722 11:53:12.492526   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.492536   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:12.492543   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:12.492603   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:12.529015   59674 cri.go:89] found id: ""
	I0722 11:53:12.529046   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.529056   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:12.529063   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:12.529122   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:12.564325   59674 cri.go:89] found id: ""
	I0722 11:53:12.564353   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.564363   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:12.564371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:12.564441   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:12.603232   59674 cri.go:89] found id: ""
	I0722 11:53:12.603257   59674 logs.go:276] 0 containers: []
	W0722 11:53:12.603269   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:12.603278   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:12.603289   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:12.689901   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:12.689933   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:12.729780   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:12.729808   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:12.778899   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:12.778928   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:12.792619   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:12.792649   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:12.860293   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:15.361321   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:15.375062   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:15.375125   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:15.409072   59674 cri.go:89] found id: ""
	I0722 11:53:15.409096   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.409104   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:15.409109   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:15.409163   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:11.176321   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:13.176728   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.176983   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:11.356441   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:13.356690   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:14.846984   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:17.346182   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:19.346559   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.447004   59674 cri.go:89] found id: ""
	I0722 11:53:15.447026   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.447033   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:15.447039   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:15.447096   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:15.480783   59674 cri.go:89] found id: ""
	I0722 11:53:15.480811   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.480822   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:15.480829   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:15.480906   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:15.520672   59674 cri.go:89] found id: ""
	I0722 11:53:15.520701   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.520713   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:15.520721   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:15.520777   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:15.557886   59674 cri.go:89] found id: ""
	I0722 11:53:15.557916   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.557926   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:15.557933   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:15.557994   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:15.593517   59674 cri.go:89] found id: ""
	I0722 11:53:15.593545   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.593555   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:15.593561   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:15.593619   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:15.628205   59674 cri.go:89] found id: ""
	I0722 11:53:15.628235   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.628246   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:15.628253   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:15.628314   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:15.664239   59674 cri.go:89] found id: ""
	I0722 11:53:15.664265   59674 logs.go:276] 0 containers: []
	W0722 11:53:15.664276   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:15.664287   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:15.664300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:15.714246   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:15.714281   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:15.728467   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:15.728490   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:15.813299   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:15.813323   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:15.813339   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:15.899949   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:15.899984   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:18.443394   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:18.457499   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:18.457555   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:18.489712   59674 cri.go:89] found id: ""
	I0722 11:53:18.489735   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.489745   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:18.489752   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:18.489812   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:18.524947   59674 cri.go:89] found id: ""
	I0722 11:53:18.524973   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.524982   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:18.524989   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:18.525045   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:18.560325   59674 cri.go:89] found id: ""
	I0722 11:53:18.560350   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.560361   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:18.560367   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:18.560439   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:18.594221   59674 cri.go:89] found id: ""
	I0722 11:53:18.594247   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.594255   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:18.594265   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:18.594322   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:18.630809   59674 cri.go:89] found id: ""
	I0722 11:53:18.630839   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.630850   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:18.630857   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:18.630917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:18.666051   59674 cri.go:89] found id: ""
	I0722 11:53:18.666078   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.666089   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:18.666100   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:18.666159   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:18.703337   59674 cri.go:89] found id: ""
	I0722 11:53:18.703362   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.703370   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:18.703375   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:18.703435   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:18.738960   59674 cri.go:89] found id: ""
	I0722 11:53:18.738990   59674 logs.go:276] 0 containers: []
	W0722 11:53:18.738999   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:18.739008   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:18.739022   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:18.788130   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:18.788163   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:18.802219   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:18.802249   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:18.869568   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:18.869586   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:18.869597   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:18.947223   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:18.947256   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:17.177247   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:19.677072   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:15.857320   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:18.356290   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:20.356364   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:21.346698   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:23.846749   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:21.487936   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:21.501337   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:21.501421   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:21.537649   59674 cri.go:89] found id: ""
	I0722 11:53:21.537674   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.537681   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:21.537686   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:21.537746   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:21.583693   59674 cri.go:89] found id: ""
	I0722 11:53:21.583728   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.583738   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:21.583745   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:21.583803   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:21.621690   59674 cri.go:89] found id: ""
	I0722 11:53:21.621714   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.621722   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:21.621728   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:21.621773   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:21.657855   59674 cri.go:89] found id: ""
	I0722 11:53:21.657878   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.657885   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:21.657891   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:21.657953   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:21.695025   59674 cri.go:89] found id: ""
	I0722 11:53:21.695051   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.695059   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:21.695065   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:21.695113   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:21.730108   59674 cri.go:89] found id: ""
	I0722 11:53:21.730138   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.730146   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:21.730151   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:21.730208   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:21.763943   59674 cri.go:89] found id: ""
	I0722 11:53:21.763972   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.763980   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:21.763985   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:21.764030   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:21.801227   59674 cri.go:89] found id: ""
	I0722 11:53:21.801251   59674 logs.go:276] 0 containers: []
	W0722 11:53:21.801259   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:21.801270   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:21.801283   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:21.851428   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:21.851457   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:21.867798   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:21.867827   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:21.945577   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:21.945599   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:21.945612   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:22.028796   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:22.028839   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:24.577167   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:24.589859   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:24.589917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:24.623952   59674 cri.go:89] found id: ""
	I0722 11:53:24.623985   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.623997   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:24.624003   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:24.624065   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:24.658881   59674 cri.go:89] found id: ""
	I0722 11:53:24.658910   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.658919   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:24.658925   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:24.658973   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:24.694551   59674 cri.go:89] found id: ""
	I0722 11:53:24.694574   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.694584   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:24.694590   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:24.694634   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:24.728952   59674 cri.go:89] found id: ""
	I0722 11:53:24.728980   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.728990   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:24.728999   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:24.729061   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:24.764562   59674 cri.go:89] found id: ""
	I0722 11:53:24.764584   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.764592   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:24.764597   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:24.764643   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:24.804184   59674 cri.go:89] found id: ""
	I0722 11:53:24.804209   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.804219   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:24.804226   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:24.804277   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:24.841870   59674 cri.go:89] found id: ""
	I0722 11:53:24.841896   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.841906   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:24.841913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:24.841967   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:24.876174   59674 cri.go:89] found id: ""
	I0722 11:53:24.876201   59674 logs.go:276] 0 containers: []
	W0722 11:53:24.876210   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:24.876220   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:24.876234   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:24.928405   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:24.928434   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:24.942443   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:24.942472   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:25.010281   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:25.010304   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:25.010318   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:25.091493   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:25.091525   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:22.176013   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:24.177414   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:22.356642   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:24.856817   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:26.346061   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:28.346192   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:27.630939   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:27.644250   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:27.644324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:27.686356   59674 cri.go:89] found id: ""
	I0722 11:53:27.686381   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.686391   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:27.686404   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:27.686483   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:27.719105   59674 cri.go:89] found id: ""
	I0722 11:53:27.719133   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.719143   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:27.719149   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:27.719210   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:27.755476   59674 cri.go:89] found id: ""
	I0722 11:53:27.755505   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.755514   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:27.755520   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:27.755570   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:27.789936   59674 cri.go:89] found id: ""
	I0722 11:53:27.789963   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.789971   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:27.789977   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:27.790023   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:27.824246   59674 cri.go:89] found id: ""
	I0722 11:53:27.824273   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.824280   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:27.824286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:27.824332   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:27.860081   59674 cri.go:89] found id: ""
	I0722 11:53:27.860107   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.860114   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:27.860120   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:27.860172   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:27.895705   59674 cri.go:89] found id: ""
	I0722 11:53:27.895732   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.895741   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:27.895748   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:27.895801   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:27.930750   59674 cri.go:89] found id: ""
	I0722 11:53:27.930774   59674 logs.go:276] 0 containers: []
	W0722 11:53:27.930781   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:27.930790   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:27.930802   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:28.025545   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:28.025567   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:28.025578   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:28.111194   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:28.111227   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:28.154270   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:28.154300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:28.205822   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:28.205854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:26.677054   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:29.178063   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:26.856858   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:29.356840   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:30.346338   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:32.346478   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:30.720468   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:30.733753   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:30.733806   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:30.771774   59674 cri.go:89] found id: ""
	I0722 11:53:30.771803   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.771810   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:30.771816   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:30.771876   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:30.810499   59674 cri.go:89] found id: ""
	I0722 11:53:30.810526   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.810537   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:30.810543   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:30.810608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:30.846824   59674 cri.go:89] found id: ""
	I0722 11:53:30.846854   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.846865   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:30.846872   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:30.846929   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:30.882372   59674 cri.go:89] found id: ""
	I0722 11:53:30.882399   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.882408   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:30.882415   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:30.882462   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:30.916152   59674 cri.go:89] found id: ""
	I0722 11:53:30.916186   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.916201   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:30.916209   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:30.916281   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:30.950442   59674 cri.go:89] found id: ""
	I0722 11:53:30.950466   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.950475   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:30.950482   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:30.950537   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:30.988328   59674 cri.go:89] found id: ""
	I0722 11:53:30.988355   59674 logs.go:276] 0 containers: []
	W0722 11:53:30.988367   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:30.988374   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:30.988452   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:31.024500   59674 cri.go:89] found id: ""
	I0722 11:53:31.024531   59674 logs.go:276] 0 containers: []
	W0722 11:53:31.024542   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:31.024552   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:31.024565   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:31.078276   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:31.078306   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:31.093640   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:31.093665   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:31.161107   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:31.161131   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:31.161145   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:31.248520   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:31.248552   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:33.792694   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:33.806731   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:33.806802   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:33.840813   59674 cri.go:89] found id: ""
	I0722 11:53:33.840842   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.840852   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:33.840859   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:33.840930   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:33.878353   59674 cri.go:89] found id: ""
	I0722 11:53:33.878380   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.878388   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:33.878394   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:33.878453   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:33.913894   59674 cri.go:89] found id: ""
	I0722 11:53:33.913927   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.913937   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:33.913944   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:33.914007   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:33.950659   59674 cri.go:89] found id: ""
	I0722 11:53:33.950689   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.950700   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:33.950706   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:33.950762   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:33.987904   59674 cri.go:89] found id: ""
	I0722 11:53:33.987932   59674 logs.go:276] 0 containers: []
	W0722 11:53:33.987940   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:33.987945   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:33.987995   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:34.022877   59674 cri.go:89] found id: ""
	I0722 11:53:34.022900   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.022910   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:34.022918   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:34.022970   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:34.056678   59674 cri.go:89] found id: ""
	I0722 11:53:34.056707   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.056717   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:34.056722   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:34.056769   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:34.089573   59674 cri.go:89] found id: ""
	I0722 11:53:34.089602   59674 logs.go:276] 0 containers: []
	W0722 11:53:34.089610   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:34.089618   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:34.089630   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:34.161023   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:34.161043   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:34.161058   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:34.243215   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:34.243249   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:34.290788   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:34.290812   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:34.339653   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:34.339692   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:31.677233   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:33.678067   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:31.856615   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:33.857665   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:34.846962   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.847525   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:39.347402   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.857217   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:36.871083   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:36.871150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:36.913807   59674 cri.go:89] found id: ""
	I0722 11:53:36.913833   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.913841   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:36.913847   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:36.913923   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:36.953290   59674 cri.go:89] found id: ""
	I0722 11:53:36.953316   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.953327   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:36.953334   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:36.953395   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:36.990900   59674 cri.go:89] found id: ""
	I0722 11:53:36.990930   59674 logs.go:276] 0 containers: []
	W0722 11:53:36.990938   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:36.990943   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:36.990997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:37.034346   59674 cri.go:89] found id: ""
	I0722 11:53:37.034371   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.034381   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:37.034387   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:37.034444   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:37.071413   59674 cri.go:89] found id: ""
	I0722 11:53:37.071440   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.071451   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:37.071458   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:37.071509   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:37.107034   59674 cri.go:89] found id: ""
	I0722 11:53:37.107065   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.107076   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:37.107084   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:37.107143   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:37.145505   59674 cri.go:89] found id: ""
	I0722 11:53:37.145528   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.145536   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:37.145545   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:37.145607   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:37.182287   59674 cri.go:89] found id: ""
	I0722 11:53:37.182313   59674 logs.go:276] 0 containers: []
	W0722 11:53:37.182321   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:37.182332   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:37.182343   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:37.195663   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:37.195688   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:37.267451   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:37.267476   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:37.267492   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:37.348532   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:37.348561   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:37.396108   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:37.396134   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:39.946775   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:39.959980   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:39.960039   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:39.994172   59674 cri.go:89] found id: ""
	I0722 11:53:39.994198   59674 logs.go:276] 0 containers: []
	W0722 11:53:39.994208   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:39.994213   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:39.994269   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:40.032782   59674 cri.go:89] found id: ""
	I0722 11:53:40.032813   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.032823   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:40.032830   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:40.032890   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:40.067503   59674 cri.go:89] found id: ""
	I0722 11:53:40.067525   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.067532   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:40.067537   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:40.067593   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:40.102234   59674 cri.go:89] found id: ""
	I0722 11:53:40.102262   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.102273   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:40.102280   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:40.102342   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:40.135152   59674 cri.go:89] found id: ""
	I0722 11:53:40.135180   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.135190   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:40.135197   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:40.135262   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:40.168930   59674 cri.go:89] found id: ""
	I0722 11:53:40.168958   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.168978   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:40.168993   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:40.169056   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:40.209032   59674 cri.go:89] found id: ""
	I0722 11:53:40.209058   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.209065   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:40.209071   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:40.209131   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:40.243952   59674 cri.go:89] found id: ""
	I0722 11:53:40.243976   59674 logs.go:276] 0 containers: []
	W0722 11:53:40.243984   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:40.243993   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:40.244006   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:40.297909   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:40.297944   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:40.313359   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:40.313385   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:40.391089   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:40.391118   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:40.391136   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:36.178616   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:38.677556   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:36.356964   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:38.857992   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:41.847033   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:44.346087   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:40.469622   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:40.469652   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:43.010264   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:43.023750   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:43.023823   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:43.058899   59674 cri.go:89] found id: ""
	I0722 11:53:43.058922   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.058930   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:43.058937   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:43.058999   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:43.093308   59674 cri.go:89] found id: ""
	I0722 11:53:43.093328   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.093336   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:43.093341   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:43.093385   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:43.126617   59674 cri.go:89] found id: ""
	I0722 11:53:43.126648   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.126671   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:43.126686   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:43.126737   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:43.159455   59674 cri.go:89] found id: ""
	I0722 11:53:43.159482   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.159492   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:43.159500   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:43.159561   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:43.195726   59674 cri.go:89] found id: ""
	I0722 11:53:43.195749   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.195758   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:43.195766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:43.195830   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:43.231996   59674 cri.go:89] found id: ""
	I0722 11:53:43.232025   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.232038   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:43.232046   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:43.232118   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:43.266911   59674 cri.go:89] found id: ""
	I0722 11:53:43.266936   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.266943   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:43.266948   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:43.267005   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:43.303202   59674 cri.go:89] found id: ""
	I0722 11:53:43.303227   59674 logs.go:276] 0 containers: []
	W0722 11:53:43.303236   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:43.303243   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:43.303255   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:43.377328   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:43.377362   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:43.418732   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:43.418759   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:43.471507   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:43.471536   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:43.485141   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:43.485175   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:43.557071   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:41.178042   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:43.178179   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:41.357090   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:43.856788   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:46.346435   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.347938   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:46.057361   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:46.071701   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:46.071784   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:46.107818   59674 cri.go:89] found id: ""
	I0722 11:53:46.107845   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.107853   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:46.107859   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:46.107952   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:46.141871   59674 cri.go:89] found id: ""
	I0722 11:53:46.141898   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.141906   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:46.141911   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:46.141972   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:46.180980   59674 cri.go:89] found id: ""
	I0722 11:53:46.181004   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.181014   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:46.181021   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:46.181083   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:46.219765   59674 cri.go:89] found id: ""
	I0722 11:53:46.219797   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.219806   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:46.219812   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:46.219866   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:46.259517   59674 cri.go:89] found id: ""
	I0722 11:53:46.259544   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.259554   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:46.259562   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:46.259621   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:46.292190   59674 cri.go:89] found id: ""
	I0722 11:53:46.292220   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.292230   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:46.292239   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:46.292305   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:46.325494   59674 cri.go:89] found id: ""
	I0722 11:53:46.325519   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.325529   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:46.325536   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:46.325608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:46.364367   59674 cri.go:89] found id: ""
	I0722 11:53:46.364403   59674 logs.go:276] 0 containers: []
	W0722 11:53:46.364412   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:46.364422   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:46.364435   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:46.417749   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:46.417792   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:46.433793   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:46.433817   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:46.502075   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:46.502098   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:46.502111   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:46.584038   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:46.584075   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:49.127895   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:49.141601   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:49.141672   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:49.175251   59674 cri.go:89] found id: ""
	I0722 11:53:49.175276   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.175284   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:49.175290   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:49.175346   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:49.214504   59674 cri.go:89] found id: ""
	I0722 11:53:49.214552   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.214563   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:49.214570   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:49.214631   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:49.251844   59674 cri.go:89] found id: ""
	I0722 11:53:49.251872   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.251882   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:49.251889   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:49.251955   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:49.285540   59674 cri.go:89] found id: ""
	I0722 11:53:49.285569   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.285577   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:49.285582   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:49.285630   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:49.323300   59674 cri.go:89] found id: ""
	I0722 11:53:49.323321   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.323331   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:49.323336   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:49.323393   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:49.361571   59674 cri.go:89] found id: ""
	I0722 11:53:49.361599   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.361609   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:49.361615   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:49.361675   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:49.398709   59674 cri.go:89] found id: ""
	I0722 11:53:49.398736   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.398747   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:49.398753   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:49.398813   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:49.430527   59674 cri.go:89] found id: ""
	I0722 11:53:49.430552   59674 logs.go:276] 0 containers: []
	W0722 11:53:49.430564   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:49.430576   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:49.430591   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:49.481517   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:49.481557   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:49.496069   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:49.496094   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:49.563515   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:49.563536   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:49.563549   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:49.645313   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:49.645354   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:45.678130   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.179309   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:45.857932   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:48.356438   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:50.356527   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:50.348077   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.846675   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.188460   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:52.201620   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:52.201689   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:52.238836   59674 cri.go:89] found id: ""
	I0722 11:53:52.238858   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.238865   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:52.238870   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:52.238932   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:52.275739   59674 cri.go:89] found id: ""
	I0722 11:53:52.275760   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.275768   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:52.275781   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:52.275839   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:52.310362   59674 cri.go:89] found id: ""
	I0722 11:53:52.310390   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.310397   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:52.310402   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:52.310461   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:52.348733   59674 cri.go:89] found id: ""
	I0722 11:53:52.348753   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.348760   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:52.348766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:52.348822   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:52.383052   59674 cri.go:89] found id: ""
	I0722 11:53:52.383079   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.383087   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:52.383094   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:52.383155   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:52.420557   59674 cri.go:89] found id: ""
	I0722 11:53:52.420579   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.420587   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:52.420592   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:52.420655   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:52.454027   59674 cri.go:89] found id: ""
	I0722 11:53:52.454057   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.454066   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:52.454073   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:52.454134   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:52.495433   59674 cri.go:89] found id: ""
	I0722 11:53:52.495458   59674 logs.go:276] 0 containers: []
	W0722 11:53:52.495469   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:52.495480   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:52.495493   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:52.541383   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:52.541417   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:52.595687   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:52.595733   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:52.609965   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:52.609987   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:52.687531   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:52.687552   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:52.687565   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:55.270419   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:55.284577   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:55.284632   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:55.321978   59674 cri.go:89] found id: ""
	I0722 11:53:55.322014   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.322023   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:55.322030   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:55.322092   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:55.358710   59674 cri.go:89] found id: ""
	I0722 11:53:55.358736   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.358746   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:55.358753   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:55.358807   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:55.394784   59674 cri.go:89] found id: ""
	I0722 11:53:55.394810   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.394820   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:55.394827   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:55.394884   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:50.677072   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.678016   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.177624   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:52.356565   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:54.357061   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.347422   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:57.846266   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:55.429035   59674 cri.go:89] found id: ""
	I0722 11:53:55.429059   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.429066   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:55.429072   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:55.429122   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:55.464733   59674 cri.go:89] found id: ""
	I0722 11:53:55.464754   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.464761   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:55.464767   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:55.464824   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:55.500113   59674 cri.go:89] found id: ""
	I0722 11:53:55.500140   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.500152   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:55.500164   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:55.500227   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:55.536013   59674 cri.go:89] found id: ""
	I0722 11:53:55.536040   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.536050   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:55.536056   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:55.536129   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:55.575385   59674 cri.go:89] found id: ""
	I0722 11:53:55.575412   59674 logs.go:276] 0 containers: []
	W0722 11:53:55.575420   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:55.575428   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:55.575439   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:55.628427   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:55.628459   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:55.642648   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:55.642677   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:55.715236   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:55.715258   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:55.715270   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:55.794200   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:55.794233   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:58.336329   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:53:58.351000   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:53:58.351056   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:53:58.389817   59674 cri.go:89] found id: ""
	I0722 11:53:58.389841   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.389849   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:53:58.389854   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:53:58.389902   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:53:58.430814   59674 cri.go:89] found id: ""
	I0722 11:53:58.430843   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.430852   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:53:58.430857   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:53:58.430917   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:53:58.477898   59674 cri.go:89] found id: ""
	I0722 11:53:58.477928   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.477938   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:53:58.477947   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:53:58.477992   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:53:58.513426   59674 cri.go:89] found id: ""
	I0722 11:53:58.513450   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.513461   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:53:58.513468   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:53:58.513530   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:53:58.546455   59674 cri.go:89] found id: ""
	I0722 11:53:58.546484   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.546494   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:53:58.546501   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:53:58.546560   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:53:58.582248   59674 cri.go:89] found id: ""
	I0722 11:53:58.582273   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.582280   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:53:58.582286   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:53:58.582339   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:53:58.617221   59674 cri.go:89] found id: ""
	I0722 11:53:58.617246   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.617253   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:53:58.617259   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:53:58.617321   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:53:58.648896   59674 cri.go:89] found id: ""
	I0722 11:53:58.648930   59674 logs.go:276] 0 containers: []
	W0722 11:53:58.648941   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:53:58.648949   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:53:58.648962   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:53:58.701735   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:53:58.701771   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:53:58.715747   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:53:58.715766   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:53:58.782104   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:53:58.782125   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:53:58.782136   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:53:58.868634   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:53:58.868664   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:53:57.677281   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:00.179188   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:56.856873   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:58.864754   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:53:59.846378   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:02.346626   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:04.346748   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:01.410874   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:01.423839   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:01.423914   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:01.460156   59674 cri.go:89] found id: ""
	I0722 11:54:01.460181   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.460191   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:01.460198   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:01.460252   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:01.497130   59674 cri.go:89] found id: ""
	I0722 11:54:01.497156   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.497165   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:01.497172   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:01.497228   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:01.532805   59674 cri.go:89] found id: ""
	I0722 11:54:01.532832   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.532842   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:01.532849   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:01.532907   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:01.569955   59674 cri.go:89] found id: ""
	I0722 11:54:01.569989   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.569999   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:01.570014   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:01.570067   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:01.602937   59674 cri.go:89] found id: ""
	I0722 11:54:01.602967   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.602977   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:01.602983   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:01.603033   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:01.634250   59674 cri.go:89] found id: ""
	I0722 11:54:01.634276   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.634283   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:01.634289   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:01.634337   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:01.670256   59674 cri.go:89] found id: ""
	I0722 11:54:01.670286   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.670295   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:01.670300   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:01.670348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:01.708555   59674 cri.go:89] found id: ""
	I0722 11:54:01.708577   59674 logs.go:276] 0 containers: []
	W0722 11:54:01.708584   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:01.708592   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:01.708603   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:01.723065   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:01.723090   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:01.790642   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:01.790662   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:01.790673   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:01.887827   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:01.887861   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:01.927121   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:01.927143   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:04.479248   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:04.493038   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:04.493101   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:04.527516   59674 cri.go:89] found id: ""
	I0722 11:54:04.527539   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.527547   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:04.527557   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:04.527603   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:04.565830   59674 cri.go:89] found id: ""
	I0722 11:54:04.565863   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.565874   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:04.565882   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:04.565970   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:04.606198   59674 cri.go:89] found id: ""
	I0722 11:54:04.606223   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.606235   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:04.606242   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:04.606301   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:04.650372   59674 cri.go:89] found id: ""
	I0722 11:54:04.650394   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.650403   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:04.650411   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:04.650473   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:04.689556   59674 cri.go:89] found id: ""
	I0722 11:54:04.689580   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.689587   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:04.689592   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:04.689648   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:04.724954   59674 cri.go:89] found id: ""
	I0722 11:54:04.724986   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.724997   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:04.725004   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:04.725057   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:04.769000   59674 cri.go:89] found id: ""
	I0722 11:54:04.769024   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.769031   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:04.769037   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:04.769088   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:04.802022   59674 cri.go:89] found id: ""
	I0722 11:54:04.802042   59674 logs.go:276] 0 containers: []
	W0722 11:54:04.802049   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:04.802057   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:04.802067   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:04.855969   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:04.856006   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:04.871210   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:04.871238   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:04.938050   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:04.938069   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:04.938082   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:05.014415   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:05.014449   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:02.677036   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:04.677779   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:01.356993   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:03.856173   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:06.847195   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:08.847333   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:07.556725   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:07.583525   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:07.583600   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:07.618546   59674 cri.go:89] found id: ""
	I0722 11:54:07.618574   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.618584   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:07.618591   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:07.618651   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:07.655218   59674 cri.go:89] found id: ""
	I0722 11:54:07.655247   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.655256   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:07.655261   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:07.655321   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:07.695453   59674 cri.go:89] found id: ""
	I0722 11:54:07.695482   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.695491   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:07.695499   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:07.695558   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:07.729887   59674 cri.go:89] found id: ""
	I0722 11:54:07.729922   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.729932   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:07.729939   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:07.729998   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:07.768429   59674 cri.go:89] found id: ""
	I0722 11:54:07.768451   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.768458   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:07.768464   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:07.768520   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:07.804372   59674 cri.go:89] found id: ""
	I0722 11:54:07.804408   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.804419   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:07.804426   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:07.804479   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:07.840924   59674 cri.go:89] found id: ""
	I0722 11:54:07.840948   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.840958   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:07.840965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:07.841027   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:07.877796   59674 cri.go:89] found id: ""
	I0722 11:54:07.877823   59674 logs.go:276] 0 containers: []
	W0722 11:54:07.877830   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:07.877838   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:07.877849   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:07.930437   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:07.930467   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:07.943581   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:07.943611   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:08.013944   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:08.013963   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:08.013973   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:08.090969   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:08.091007   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:07.178423   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:09.178648   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:05.856697   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:07.857718   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:10.356584   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:11.345407   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:13.346477   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:10.631507   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:10.644886   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:10.644958   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:10.679242   59674 cri.go:89] found id: ""
	I0722 11:54:10.679268   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.679278   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:10.679284   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:10.679340   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:10.714324   59674 cri.go:89] found id: ""
	I0722 11:54:10.714351   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.714358   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:10.714364   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:10.714425   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:10.751053   59674 cri.go:89] found id: ""
	I0722 11:54:10.751075   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.751090   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:10.751097   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:10.751164   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:10.788736   59674 cri.go:89] found id: ""
	I0722 11:54:10.788765   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.788775   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:10.788782   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:10.788899   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:10.823780   59674 cri.go:89] found id: ""
	I0722 11:54:10.823804   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.823814   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:10.823821   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:10.823884   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:10.859708   59674 cri.go:89] found id: ""
	I0722 11:54:10.859731   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.859741   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:10.859748   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:10.859804   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:10.893364   59674 cri.go:89] found id: ""
	I0722 11:54:10.893390   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.893400   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:10.893409   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:10.893471   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:10.929444   59674 cri.go:89] found id: ""
	I0722 11:54:10.929472   59674 logs.go:276] 0 containers: []
	W0722 11:54:10.929481   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:10.929489   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:10.929501   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:10.968567   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:10.968598   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:11.024447   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:11.024484   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:11.039405   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:11.039429   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:11.116322   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:11.116341   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:11.116356   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
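The block above is one pass of minikube's diagnostic loop for this node (pid 59674): it looks for a running kube-apiserver with pgrep, asks the CRI runtime via crictl for each expected control-plane container, finds none, and then falls back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output before retrying a few seconds later. A minimal sketch of the same checks run by hand over minikube ssh follows; <profile> is a placeholder for the profile under test, and only commands that appear verbatim in the log are used.

    # Same checks the loop runs, executed manually (<profile> is a placeholder).
    minikube -p <profile> ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name="$c"
    done
    # Fallback diagnostics gathered when nothing is found:
    minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400
    minikube -p <profile> ssh -- sudo journalctl -u crio -n 400
    minikube -p <profile> ssh -- "sudo crictl ps -a || sudo docker ps -a"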
	I0722 11:54:13.697581   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:13.711738   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:13.711831   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:13.747711   59674 cri.go:89] found id: ""
	I0722 11:54:13.747742   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.747750   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:13.747757   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:13.747812   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:13.790965   59674 cri.go:89] found id: ""
	I0722 11:54:13.790987   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.790997   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:13.791005   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:13.791053   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:13.829043   59674 cri.go:89] found id: ""
	I0722 11:54:13.829071   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.829080   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:13.829086   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:13.829159   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:13.865542   59674 cri.go:89] found id: ""
	I0722 11:54:13.865560   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.865567   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:13.865572   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:13.865615   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:13.897709   59674 cri.go:89] found id: ""
	I0722 11:54:13.897749   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.897762   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:13.897769   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:13.897833   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:13.931319   59674 cri.go:89] found id: ""
	I0722 11:54:13.931339   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.931348   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:13.931355   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:13.931409   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:13.987927   59674 cri.go:89] found id: ""
	I0722 11:54:13.987954   59674 logs.go:276] 0 containers: []
	W0722 11:54:13.987964   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:13.987970   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:13.988030   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:14.028680   59674 cri.go:89] found id: ""
	I0722 11:54:14.028706   59674 logs.go:276] 0 containers: []
	W0722 11:54:14.028716   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:14.028726   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:14.028743   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:14.089863   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:14.089904   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:14.103664   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:14.103691   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:14.174453   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:14.174479   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:14.174496   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:14.260748   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:14.260780   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
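Each "describe nodes" attempt in this loop fails the same way: kubectl on the node targets the local apiserver at localhost:8443, and because no kube-apiserver container ever starts, the connection is refused. A quick way to confirm nothing is serving on that port, as a hedged sketch (<profile> is again a placeholder):

    # Nothing should be listening on 8443 while the apiserver is down.
    minikube -p <profile> ssh -- sudo ss -tlnp | grep 8443
    # Hitting the port directly reproduces the refusal seen in the log.
    minikube -p <profile> ssh -- curl -k https://localhost:8443/healthz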
	I0722 11:54:11.677037   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:13.679784   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:12.856073   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:14.857810   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:15.846577   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:17.846873   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
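The interleaved pod_ready.go lines belong to the other clusters running in parallel (pids 58921, 60225 and 59477), each polling a metrics-server pod that never reports Ready. The equivalent readiness check with kubectl, as a hedged sketch (the context name is a placeholder and the k8s-app=metrics-server label is assumed to match the addon's pods):

    # Print each metrics-server pod with its Ready condition status.
    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'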
	I0722 11:54:16.800474   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:16.814408   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:16.814472   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:16.849936   59674 cri.go:89] found id: ""
	I0722 11:54:16.849963   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.849972   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:16.849979   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:16.850037   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:16.884323   59674 cri.go:89] found id: ""
	I0722 11:54:16.884349   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.884360   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:16.884367   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:16.884445   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:16.921549   59674 cri.go:89] found id: ""
	I0722 11:54:16.921635   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.921652   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:16.921660   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:16.921726   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:16.959670   59674 cri.go:89] found id: ""
	I0722 11:54:16.959701   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.959711   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:16.959719   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:16.959779   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:16.995577   59674 cri.go:89] found id: ""
	I0722 11:54:16.995605   59674 logs.go:276] 0 containers: []
	W0722 11:54:16.995615   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:16.995624   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:16.995683   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:17.032026   59674 cri.go:89] found id: ""
	I0722 11:54:17.032056   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.032067   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:17.032075   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:17.032156   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:17.068309   59674 cri.go:89] found id: ""
	I0722 11:54:17.068337   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.068348   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:17.068355   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:17.068433   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:17.106731   59674 cri.go:89] found id: ""
	I0722 11:54:17.106760   59674 logs.go:276] 0 containers: []
	W0722 11:54:17.106776   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:17.106787   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:17.106801   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:17.159944   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:17.159971   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:17.174479   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:17.174513   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:17.249311   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:17.249332   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:17.249345   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:17.335527   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:17.335561   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:19.874791   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:19.892887   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:19.892961   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:19.945700   59674 cri.go:89] found id: ""
	I0722 11:54:19.945729   59674 logs.go:276] 0 containers: []
	W0722 11:54:19.945737   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:19.945742   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:19.945799   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:19.996027   59674 cri.go:89] found id: ""
	I0722 11:54:19.996062   59674 logs.go:276] 0 containers: []
	W0722 11:54:19.996072   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:19.996078   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:19.996133   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:20.040793   59674 cri.go:89] found id: ""
	I0722 11:54:20.040820   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.040830   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:20.040837   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:20.040906   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:20.073737   59674 cri.go:89] found id: ""
	I0722 11:54:20.073760   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.073768   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:20.073774   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:20.073817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:20.108255   59674 cri.go:89] found id: ""
	I0722 11:54:20.108280   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.108287   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:20.108294   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:20.108342   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:20.143140   59674 cri.go:89] found id: ""
	I0722 11:54:20.143165   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.143174   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:20.143180   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:20.143225   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:20.177009   59674 cri.go:89] found id: ""
	I0722 11:54:20.177030   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.177037   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:20.177043   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:20.177089   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:20.215743   59674 cri.go:89] found id: ""
	I0722 11:54:20.215765   59674 logs.go:276] 0 containers: []
	W0722 11:54:20.215773   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:20.215781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:20.215791   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:20.267872   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:20.267905   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:20.281601   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:20.281626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:20.352347   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:20.352364   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:20.352376   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:16.178494   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:18.676724   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:17.357519   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:19.856259   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:20.346488   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:22.847018   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:20.431695   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:20.431727   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:22.974218   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:22.988161   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:22.988235   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:23.024542   59674 cri.go:89] found id: ""
	I0722 11:54:23.024571   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.024581   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:23.024588   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:23.024656   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:23.067343   59674 cri.go:89] found id: ""
	I0722 11:54:23.067367   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.067376   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:23.067383   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:23.067443   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:23.103711   59674 cri.go:89] found id: ""
	I0722 11:54:23.103741   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.103751   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:23.103758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:23.103817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:23.137896   59674 cri.go:89] found id: ""
	I0722 11:54:23.137926   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.137937   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:23.137944   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:23.138002   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:23.174689   59674 cri.go:89] found id: ""
	I0722 11:54:23.174722   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.174733   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:23.174742   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:23.174795   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:23.208669   59674 cri.go:89] found id: ""
	I0722 11:54:23.208690   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.208700   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:23.208708   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:23.208766   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:23.243286   59674 cri.go:89] found id: ""
	I0722 11:54:23.243314   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.243326   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:23.243335   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:23.243401   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:23.279277   59674 cri.go:89] found id: ""
	I0722 11:54:23.279303   59674 logs.go:276] 0 containers: []
	W0722 11:54:23.279312   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:23.279324   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:23.279337   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:23.332016   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:23.332045   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:23.346383   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:23.346417   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:23.421449   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:23.421471   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:23.421486   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:23.507395   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:23.507432   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:20.678148   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:23.180048   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:21.856482   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:24.357098   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:25.346414   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:27.847108   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:26.053610   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:26.068359   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:26.068448   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:26.102425   59674 cri.go:89] found id: ""
	I0722 11:54:26.102454   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.102465   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:26.102472   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:26.102531   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:26.135572   59674 cri.go:89] found id: ""
	I0722 11:54:26.135598   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.135608   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:26.135616   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:26.135682   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:26.175015   59674 cri.go:89] found id: ""
	I0722 11:54:26.175044   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.175054   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:26.175062   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:26.175123   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:26.209186   59674 cri.go:89] found id: ""
	I0722 11:54:26.209209   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.209216   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:26.209221   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:26.209275   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:26.248477   59674 cri.go:89] found id: ""
	I0722 11:54:26.248500   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.248507   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:26.248512   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:26.248590   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:26.281481   59674 cri.go:89] found id: ""
	I0722 11:54:26.281506   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.281515   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:26.281520   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:26.281580   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:26.314467   59674 cri.go:89] found id: ""
	I0722 11:54:26.314496   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.314503   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:26.314509   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:26.314556   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:26.349396   59674 cri.go:89] found id: ""
	I0722 11:54:26.349422   59674 logs.go:276] 0 containers: []
	W0722 11:54:26.349431   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:26.349441   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:26.349454   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:26.403227   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:26.403253   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:26.415860   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:26.415882   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:26.484768   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:26.484793   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:26.484809   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:26.563360   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:26.563396   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:29.103764   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:29.117120   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:29.117193   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:29.153198   59674 cri.go:89] found id: ""
	I0722 11:54:29.153241   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.153252   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:29.153260   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:29.153324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:29.190406   59674 cri.go:89] found id: ""
	I0722 11:54:29.190426   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.190433   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:29.190438   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:29.190486   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:29.232049   59674 cri.go:89] found id: ""
	I0722 11:54:29.232073   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.232080   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:29.232085   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:29.232147   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:29.270174   59674 cri.go:89] found id: ""
	I0722 11:54:29.270200   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.270208   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:29.270218   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:29.270268   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:29.307709   59674 cri.go:89] found id: ""
	I0722 11:54:29.307733   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.307740   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:29.307746   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:29.307802   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:29.343807   59674 cri.go:89] found id: ""
	I0722 11:54:29.343832   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.343842   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:29.343850   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:29.343907   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:29.380240   59674 cri.go:89] found id: ""
	I0722 11:54:29.380263   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.380270   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:29.380276   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:29.380332   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:29.412785   59674 cri.go:89] found id: ""
	I0722 11:54:29.412811   59674 logs.go:276] 0 containers: []
	W0722 11:54:29.412820   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:29.412830   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:29.412844   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:29.470948   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:29.470985   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:29.485120   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:29.485146   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:29.558760   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:29.558778   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:29.558792   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:29.638093   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:29.638123   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:25.677216   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:28.177196   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:30.179148   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:26.357390   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:28.856928   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:30.345586   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:32.346444   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:34.347606   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:32.183511   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:32.196719   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:32.196796   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:32.229436   59674 cri.go:89] found id: ""
	I0722 11:54:32.229466   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.229474   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:32.229480   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:32.229533   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:32.271971   59674 cri.go:89] found id: ""
	I0722 11:54:32.271998   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.272008   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:32.272017   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:32.272086   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:32.302967   59674 cri.go:89] found id: ""
	I0722 11:54:32.302991   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.302999   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:32.303005   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:32.303053   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:32.334443   59674 cri.go:89] found id: ""
	I0722 11:54:32.334468   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.334478   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:32.334485   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:32.334544   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:32.371586   59674 cri.go:89] found id: ""
	I0722 11:54:32.371612   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.371622   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:32.371630   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:32.371693   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:32.419920   59674 cri.go:89] found id: ""
	I0722 11:54:32.419954   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.419966   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:32.419974   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:32.420034   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:32.459377   59674 cri.go:89] found id: ""
	I0722 11:54:32.459398   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.459405   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:32.459411   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:32.459472   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:32.500740   59674 cri.go:89] found id: ""
	I0722 11:54:32.500764   59674 logs.go:276] 0 containers: []
	W0722 11:54:32.500771   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:32.500781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:32.500796   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:32.551285   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:32.551316   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:32.564448   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:32.564474   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:32.637652   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:32.637679   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:32.637694   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:32.721599   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:32.721638   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:35.265202   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:35.278766   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:35.278844   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:35.312545   59674 cri.go:89] found id: ""
	I0722 11:54:35.312574   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.312582   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:35.312587   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:35.312637   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:35.346988   59674 cri.go:89] found id: ""
	I0722 11:54:35.347014   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.347024   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:35.347032   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:35.347090   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:35.382876   59674 cri.go:89] found id: ""
	I0722 11:54:35.382908   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.382920   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:35.382929   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:35.382997   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:32.677327   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:34.677947   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:31.356011   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:33.356576   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:36.846349   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:39.346311   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:35.418093   59674 cri.go:89] found id: ""
	I0722 11:54:35.418115   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.418122   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:35.418129   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:35.418186   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:35.455262   59674 cri.go:89] found id: ""
	I0722 11:54:35.455291   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.455301   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:35.455306   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:35.455362   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:35.494893   59674 cri.go:89] found id: ""
	I0722 11:54:35.494924   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.494934   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:35.494945   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:35.495007   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:35.529768   59674 cri.go:89] found id: ""
	I0722 11:54:35.529791   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.529798   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:35.529804   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:35.529850   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:35.564972   59674 cri.go:89] found id: ""
	I0722 11:54:35.565001   59674 logs.go:276] 0 containers: []
	W0722 11:54:35.565012   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:35.565024   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:35.565039   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:35.615985   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:35.616025   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:35.630133   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:35.630156   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:35.699669   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:35.699697   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:35.699711   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:35.779737   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:35.779771   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:38.320368   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:38.334371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:38.334443   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:38.371050   59674 cri.go:89] found id: ""
	I0722 11:54:38.371081   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.371088   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:38.371109   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:38.371170   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:38.410676   59674 cri.go:89] found id: ""
	I0722 11:54:38.410698   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.410706   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:38.410712   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:38.410770   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:38.447331   59674 cri.go:89] found id: ""
	I0722 11:54:38.447357   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.447366   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:38.447371   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:38.447426   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:38.483548   59674 cri.go:89] found id: ""
	I0722 11:54:38.483589   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.483600   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:38.483608   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:38.483669   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:38.521694   59674 cri.go:89] found id: ""
	I0722 11:54:38.521723   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.521737   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:38.521742   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:38.521799   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:38.560507   59674 cri.go:89] found id: ""
	I0722 11:54:38.560532   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.560543   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:38.560550   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:38.560609   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:38.595734   59674 cri.go:89] found id: ""
	I0722 11:54:38.595761   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.595771   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:38.595778   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:38.595839   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:38.634176   59674 cri.go:89] found id: ""
	I0722 11:54:38.634198   59674 logs.go:276] 0 containers: []
	W0722 11:54:38.634205   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:38.634213   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:38.634224   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:38.688196   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:38.688235   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:38.701554   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:38.701583   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:38.772547   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:38.772575   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:38.772590   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:38.858025   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:38.858056   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:37.179449   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:39.179903   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:35.856424   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:38.357566   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:41.347531   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:43.846195   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:41.400777   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:41.415370   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:41.415427   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:41.448023   59674 cri.go:89] found id: ""
	I0722 11:54:41.448045   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.448052   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:41.448058   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:41.448104   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:41.480745   59674 cri.go:89] found id: ""
	I0722 11:54:41.480766   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.480774   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:41.480779   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:41.480830   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:41.514627   59674 cri.go:89] found id: ""
	I0722 11:54:41.514651   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.514666   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:41.514673   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:41.514731   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:41.548226   59674 cri.go:89] found id: ""
	I0722 11:54:41.548255   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.548267   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:41.548274   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:41.548325   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:41.581361   59674 cri.go:89] found id: ""
	I0722 11:54:41.581383   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.581390   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:41.581396   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:41.581452   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:41.616249   59674 cri.go:89] found id: ""
	I0722 11:54:41.616277   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.616287   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:41.616295   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:41.616361   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:41.651569   59674 cri.go:89] found id: ""
	I0722 11:54:41.651593   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.651601   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:41.651607   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:41.651657   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:41.685173   59674 cri.go:89] found id: ""
	I0722 11:54:41.685194   59674 logs.go:276] 0 containers: []
	W0722 11:54:41.685202   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:41.685209   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:41.685222   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:41.762374   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:41.762393   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:41.762405   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:41.843370   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:41.843403   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:41.883097   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:41.883127   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:41.933824   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:41.933854   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:44.447568   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:44.461528   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:44.461608   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:44.497926   59674 cri.go:89] found id: ""
	I0722 11:54:44.497951   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.497958   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:44.497963   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:44.498023   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:44.534483   59674 cri.go:89] found id: ""
	I0722 11:54:44.534507   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.534515   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:44.534520   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:44.534565   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:44.573106   59674 cri.go:89] found id: ""
	I0722 11:54:44.573140   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.573148   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:44.573154   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:44.573204   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:44.610565   59674 cri.go:89] found id: ""
	I0722 11:54:44.610612   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.610626   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:44.610634   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:44.610697   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:44.646946   59674 cri.go:89] found id: ""
	I0722 11:54:44.646980   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.646994   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:44.647001   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:44.647060   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:44.685876   59674 cri.go:89] found id: ""
	I0722 11:54:44.685904   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.685913   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:44.685919   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:44.685969   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:44.720398   59674 cri.go:89] found id: ""
	I0722 11:54:44.720425   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.720434   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:44.720441   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:44.720506   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:44.757472   59674 cri.go:89] found id: ""
	I0722 11:54:44.757501   59674 logs.go:276] 0 containers: []
	W0722 11:54:44.757511   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:44.757522   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:44.757535   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:44.807442   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:44.807468   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:44.820432   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:44.820457   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:44.892182   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:44.892199   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:44.892209   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:44.976545   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:44.976580   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:41.677120   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:44.178554   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:40.855578   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:42.856278   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:44.857519   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:45.846257   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:47.846886   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:47.519413   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:47.532974   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:47.533035   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:47.570869   59674 cri.go:89] found id: ""
	I0722 11:54:47.570904   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.570915   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:47.570923   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:47.571055   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:47.606020   59674 cri.go:89] found id: ""
	I0722 11:54:47.606045   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.606052   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:47.606057   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:47.606106   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:47.642717   59674 cri.go:89] found id: ""
	I0722 11:54:47.642741   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.642752   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:47.642758   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:47.642817   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:47.677761   59674 cri.go:89] found id: ""
	I0722 11:54:47.677786   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.677796   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:47.677803   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:47.677863   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:47.710989   59674 cri.go:89] found id: ""
	I0722 11:54:47.711016   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.711025   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:47.711032   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:47.711097   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:47.744814   59674 cri.go:89] found id: ""
	I0722 11:54:47.744839   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.744847   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:47.744853   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:47.744904   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:47.778926   59674 cri.go:89] found id: ""
	I0722 11:54:47.778953   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.778960   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:47.778965   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:47.779015   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:47.818419   59674 cri.go:89] found id: ""
	I0722 11:54:47.818458   59674 logs.go:276] 0 containers: []
	W0722 11:54:47.818465   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:47.818473   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:47.818485   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:47.870867   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:47.870892   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:47.884504   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:47.884523   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:47.952449   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:47.952470   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:47.952485   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:48.035731   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:48.035763   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:46.181522   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:48.676888   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:46.860517   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:49.356456   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:50.346125   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:52.848790   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:50.589071   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:50.602786   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:50.602880   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:50.638324   59674 cri.go:89] found id: ""
	I0722 11:54:50.638355   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.638366   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:50.638375   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:50.638438   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:50.674906   59674 cri.go:89] found id: ""
	I0722 11:54:50.674932   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.674947   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:50.674955   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:50.675017   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:50.709284   59674 cri.go:89] found id: ""
	I0722 11:54:50.709313   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.709322   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:50.709328   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:50.709387   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:50.748595   59674 cri.go:89] found id: ""
	I0722 11:54:50.748623   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.748632   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:50.748638   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:50.748695   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:50.782681   59674 cri.go:89] found id: ""
	I0722 11:54:50.782707   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.782716   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:50.782721   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:50.782797   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:50.820037   59674 cri.go:89] found id: ""
	I0722 11:54:50.820067   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.820077   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:50.820084   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:50.820150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:50.857807   59674 cri.go:89] found id: ""
	I0722 11:54:50.857835   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.857845   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:50.857852   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:50.857925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:50.894924   59674 cri.go:89] found id: ""
	I0722 11:54:50.894946   59674 logs.go:276] 0 containers: []
	W0722 11:54:50.894954   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:50.894962   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:50.894981   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:50.947373   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:50.947407   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:50.962243   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:50.962272   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:51.041450   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:51.041474   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:51.041488   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:51.133982   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:51.134018   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:53.678461   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:53.691710   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:53.691778   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:53.726266   59674 cri.go:89] found id: ""
	I0722 11:54:53.726294   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.726305   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:53.726313   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:53.726366   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:53.759262   59674 cri.go:89] found id: ""
	I0722 11:54:53.759291   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.759303   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:53.759311   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:53.759381   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:53.795859   59674 cri.go:89] found id: ""
	I0722 11:54:53.795894   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.795906   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:53.795913   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:53.795975   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:53.842343   59674 cri.go:89] found id: ""
	I0722 11:54:53.842366   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.842379   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:53.842387   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:53.842444   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:53.882648   59674 cri.go:89] found id: ""
	I0722 11:54:53.882674   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.882684   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:53.882691   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:53.882751   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:53.914352   59674 cri.go:89] found id: ""
	I0722 11:54:53.914373   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.914380   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:53.914386   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:53.914442   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:53.952257   59674 cri.go:89] found id: ""
	I0722 11:54:53.952286   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.952296   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:53.952301   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:53.952348   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:53.991612   59674 cri.go:89] found id: ""
	I0722 11:54:53.991642   59674 logs.go:276] 0 containers: []
	W0722 11:54:53.991651   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:53.991661   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:53.991682   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:54.065253   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:54.065271   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:54.065285   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:54.153570   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:54.153603   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:54.195100   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:54.195138   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:54.246784   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:54.246812   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:50.677516   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:53.180319   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.182749   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:51.356623   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:53.856817   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.346845   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:57.846691   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:56.762702   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:56.776501   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:56.776567   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:56.809838   59674 cri.go:89] found id: ""
	I0722 11:54:56.809866   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.809874   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:56.809882   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:56.809934   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:56.845567   59674 cri.go:89] found id: ""
	I0722 11:54:56.845594   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.845602   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:56.845610   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:56.845672   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:56.879899   59674 cri.go:89] found id: ""
	I0722 11:54:56.879929   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.879939   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:56.879946   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:56.880000   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:56.911631   59674 cri.go:89] found id: ""
	I0722 11:54:56.911658   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.911667   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:56.911675   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:56.911734   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:54:56.946101   59674 cri.go:89] found id: ""
	I0722 11:54:56.946124   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.946132   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:54:56.946142   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:54:56.946211   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:54:56.980265   59674 cri.go:89] found id: ""
	I0722 11:54:56.980289   59674 logs.go:276] 0 containers: []
	W0722 11:54:56.980301   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:54:56.980308   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:54:56.980367   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:54:57.014902   59674 cri.go:89] found id: ""
	I0722 11:54:57.014935   59674 logs.go:276] 0 containers: []
	W0722 11:54:57.014951   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:54:57.014958   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:54:57.015021   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:54:57.051573   59674 cri.go:89] found id: ""
	I0722 11:54:57.051597   59674 logs.go:276] 0 containers: []
	W0722 11:54:57.051605   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:54:57.051613   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:54:57.051626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:54:57.065650   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:54:57.065683   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:54:57.133230   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:54:57.133257   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:54:57.133275   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:54:57.217002   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:54:57.217038   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:57.260236   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:54:57.260264   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:54:59.812785   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:54:59.826782   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:54:59.826836   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:54:59.863375   59674 cri.go:89] found id: ""
	I0722 11:54:59.863404   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.863414   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:54:59.863423   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:54:59.863484   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:54:59.902161   59674 cri.go:89] found id: ""
	I0722 11:54:59.902193   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.902204   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:54:59.902211   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:54:59.902263   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:54:59.945153   59674 cri.go:89] found id: ""
	I0722 11:54:59.945182   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.945193   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:54:59.945201   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:54:59.945265   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:54:59.989535   59674 cri.go:89] found id: ""
	I0722 11:54:59.989562   59674 logs.go:276] 0 containers: []
	W0722 11:54:59.989570   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:54:59.989575   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:54:59.989643   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:00.028977   59674 cri.go:89] found id: ""
	I0722 11:55:00.029001   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.029009   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:00.029017   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:00.029068   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:00.065396   59674 cri.go:89] found id: ""
	I0722 11:55:00.065425   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.065437   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:00.065447   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:00.065502   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:00.104354   59674 cri.go:89] found id: ""
	I0722 11:55:00.104397   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.104409   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:00.104417   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:00.104480   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:00.141798   59674 cri.go:89] found id: ""
	I0722 11:55:00.141822   59674 logs.go:276] 0 containers: []
	W0722 11:55:00.141829   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:00.141838   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:00.141853   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:00.195791   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:00.195823   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:00.214812   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:00.214845   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:00.307286   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:00.307311   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:00.307323   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:00.409770   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:00.409805   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:54:57.676737   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:59.677273   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:55.857348   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:58.356555   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:54:59.846954   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.345998   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:04.346077   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.951630   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:02.964673   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:02.964728   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:03.005256   59674 cri.go:89] found id: ""
	I0722 11:55:03.005285   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.005296   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:03.005303   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:03.005359   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:03.037558   59674 cri.go:89] found id: ""
	I0722 11:55:03.037587   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.037595   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:03.037600   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:03.037646   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:03.071168   59674 cri.go:89] found id: ""
	I0722 11:55:03.071196   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.071206   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:03.071214   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:03.071271   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:03.104212   59674 cri.go:89] found id: ""
	I0722 11:55:03.104238   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.104248   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:03.104255   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:03.104313   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:03.141378   59674 cri.go:89] found id: ""
	I0722 11:55:03.141401   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.141409   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:03.141414   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:03.141458   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:03.178881   59674 cri.go:89] found id: ""
	I0722 11:55:03.178906   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.178915   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:03.178923   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:03.178987   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:03.215768   59674 cri.go:89] found id: ""
	I0722 11:55:03.215796   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.215804   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:03.215810   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:03.215854   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:03.256003   59674 cri.go:89] found id: ""
	I0722 11:55:03.256029   59674 logs.go:276] 0 containers: []
	W0722 11:55:03.256041   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:03.256051   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:03.256069   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:03.308182   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:03.308216   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:03.323870   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:03.323903   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:03.406646   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:03.406670   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:03.406682   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:03.490947   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:03.490984   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:01.677312   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:03.677505   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:00.856013   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:02.856211   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:04.857113   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.348448   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:08.846007   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.030341   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:06.046814   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:06.046874   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:06.088735   59674 cri.go:89] found id: ""
	I0722 11:55:06.088756   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.088764   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:06.088770   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:06.088823   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:06.153138   59674 cri.go:89] found id: ""
	I0722 11:55:06.153165   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.153174   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:06.153181   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:06.153241   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:06.203479   59674 cri.go:89] found id: ""
	I0722 11:55:06.203506   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.203516   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:06.203523   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:06.203585   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:06.239632   59674 cri.go:89] found id: ""
	I0722 11:55:06.239661   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.239671   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:06.239678   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:06.239739   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:06.278663   59674 cri.go:89] found id: ""
	I0722 11:55:06.278693   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.278703   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:06.278711   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:06.278772   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:06.318291   59674 cri.go:89] found id: ""
	I0722 11:55:06.318315   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.318323   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:06.318329   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:06.318382   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:06.355362   59674 cri.go:89] found id: ""
	I0722 11:55:06.355383   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.355390   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:06.355395   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:06.355446   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:06.395032   59674 cri.go:89] found id: ""
	I0722 11:55:06.395062   59674 logs.go:276] 0 containers: []
	W0722 11:55:06.395073   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:06.395084   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:06.395098   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:06.451585   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:06.451623   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:06.466009   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:06.466037   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:06.534051   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:06.534071   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:06.534082   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:06.617165   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:06.617202   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:09.155242   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:09.169327   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:09.169389   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:09.209138   59674 cri.go:89] found id: ""
	I0722 11:55:09.209165   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.209174   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:09.209181   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:09.209243   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:09.249129   59674 cri.go:89] found id: ""
	I0722 11:55:09.249156   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.249167   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:09.249175   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:09.249237   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:09.284350   59674 cri.go:89] found id: ""
	I0722 11:55:09.284374   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.284400   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:09.284416   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:09.284487   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:09.317288   59674 cri.go:89] found id: ""
	I0722 11:55:09.317315   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.317322   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:09.317327   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:09.317374   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:09.353227   59674 cri.go:89] found id: ""
	I0722 11:55:09.353249   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.353259   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:09.353266   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:09.353324   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:09.388376   59674 cri.go:89] found id: ""
	I0722 11:55:09.388434   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.388442   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:09.388448   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:09.388498   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:09.422197   59674 cri.go:89] found id: ""
	I0722 11:55:09.422221   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.422228   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:09.422235   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:09.422282   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:09.455321   59674 cri.go:89] found id: ""
	I0722 11:55:09.455350   59674 logs.go:276] 0 containers: []
	W0722 11:55:09.455360   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:09.455370   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:09.455384   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:09.536331   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:09.536366   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:09.578847   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:09.578880   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:09.630597   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:09.630626   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:09.644531   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:09.644557   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:09.710502   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:05.677998   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:07.678875   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:10.179254   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:06.857151   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:09.355988   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:11.345887   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:13.346945   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:12.210716   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:12.223909   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:12.223969   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:12.259241   59674 cri.go:89] found id: ""
	I0722 11:55:12.259266   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.259275   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:12.259282   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:12.259344   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:12.293967   59674 cri.go:89] found id: ""
	I0722 11:55:12.294000   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.294007   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:12.294013   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:12.294061   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:12.328073   59674 cri.go:89] found id: ""
	I0722 11:55:12.328106   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.328114   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:12.328121   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:12.328180   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:12.363176   59674 cri.go:89] found id: ""
	I0722 11:55:12.363200   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.363207   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:12.363213   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:12.363287   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:12.398145   59674 cri.go:89] found id: ""
	I0722 11:55:12.398171   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.398180   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:12.398185   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:12.398231   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:12.431824   59674 cri.go:89] found id: ""
	I0722 11:55:12.431853   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.431861   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:12.431867   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:12.431925   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:12.465097   59674 cri.go:89] found id: ""
	I0722 11:55:12.465128   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.465135   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:12.465140   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:12.465186   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:12.502934   59674 cri.go:89] found id: ""
	I0722 11:55:12.502965   59674 logs.go:276] 0 containers: []
	W0722 11:55:12.502974   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:12.502984   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:12.502999   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:12.541950   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:12.541979   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:12.592632   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:12.592660   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:12.606073   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:12.606098   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:12.675388   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:12.675417   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:12.675432   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
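	The cycle above (and its repeats below) is the old-k8s-version node (process 59674) probing CRI-O for each expected control-plane container, finding none because the apiserver on localhost:8443 is refusing connections, and falling back to collecting kubelet, dmesg, CRI-O, and container-status diagnostics. A rough bash equivalent of one probe pass, using only the commands already visible in the log (the loop wrapper and output file names are illustrative):

	    # Probe CRI-O for each expected control-plane container, as in the logged cycle.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "No container was found matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done
	    # Fallback diagnostics gathered when nothing is found:
	    sudo journalctl -u kubelet -n 400 > kubelet.log
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	    sudo journalctl -u crio -n 400 > crio.log
	    sudo crictl ps -a || sudo docker ps -a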
	I0722 11:55:15.253008   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:15.266957   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:15.267028   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:15.303035   59674 cri.go:89] found id: ""
	I0722 11:55:15.303069   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.303080   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:15.303088   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:15.303150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:15.338089   59674 cri.go:89] found id: ""
	I0722 11:55:15.338113   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.338121   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:15.338126   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:15.338184   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:15.376973   59674 cri.go:89] found id: ""
	I0722 11:55:15.376998   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.377005   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:15.377015   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:15.377075   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:12.678613   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.178912   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:11.356248   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:13.855992   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.845568   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:17.845680   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.416466   59674 cri.go:89] found id: ""
	I0722 11:55:15.416491   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.416500   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:15.416507   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:15.416565   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:15.456472   59674 cri.go:89] found id: ""
	I0722 11:55:15.456501   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.456511   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:15.456519   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:15.456580   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:15.491963   59674 cri.go:89] found id: ""
	I0722 11:55:15.491991   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.491999   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:15.492005   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:15.492062   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:15.530819   59674 cri.go:89] found id: ""
	I0722 11:55:15.530847   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.530857   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:15.530865   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:15.530934   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:15.569388   59674 cri.go:89] found id: ""
	I0722 11:55:15.569415   59674 logs.go:276] 0 containers: []
	W0722 11:55:15.569422   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:15.569430   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:15.569439   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:15.623949   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:15.623981   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:15.637828   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:15.637848   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:15.707733   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:15.707754   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:15.707765   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:15.787435   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:15.787473   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:18.329310   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:18.342412   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:18.342476   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:18.379542   59674 cri.go:89] found id: ""
	I0722 11:55:18.379563   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.379570   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:18.379575   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:18.379657   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:18.414442   59674 cri.go:89] found id: ""
	I0722 11:55:18.414468   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.414477   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:18.414483   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:18.414526   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:18.454571   59674 cri.go:89] found id: ""
	I0722 11:55:18.454598   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.454608   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:18.454614   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:18.454658   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:18.491012   59674 cri.go:89] found id: ""
	I0722 11:55:18.491039   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.491047   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:18.491052   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:18.491114   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:18.525923   59674 cri.go:89] found id: ""
	I0722 11:55:18.525952   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.525962   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:18.525970   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:18.526031   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:18.560288   59674 cri.go:89] found id: ""
	I0722 11:55:18.560315   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.560325   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:18.560332   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:18.560412   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:18.596674   59674 cri.go:89] found id: ""
	I0722 11:55:18.596698   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.596706   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:18.596712   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:18.596766   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:18.635012   59674 cri.go:89] found id: ""
	I0722 11:55:18.635034   59674 logs.go:276] 0 containers: []
	W0722 11:55:18.635041   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:18.635049   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:18.635060   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:18.685999   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:18.686024   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:18.700085   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:18.700108   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:18.765465   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:18.765484   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:18.765495   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:18.846554   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:18.846592   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:17.179144   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:19.677144   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:15.857428   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:18.356050   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:19.846343   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:22.345281   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:24.346147   59477 pod_ready.go:102] pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:21.389684   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:21.401964   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:21.402042   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:21.438128   59674 cri.go:89] found id: ""
	I0722 11:55:21.438156   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.438165   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:21.438171   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:21.438258   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:21.475793   59674 cri.go:89] found id: ""
	I0722 11:55:21.475819   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.475828   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:21.475833   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:21.475878   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:21.510238   59674 cri.go:89] found id: ""
	I0722 11:55:21.510265   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.510273   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:21.510278   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:21.510333   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:21.548293   59674 cri.go:89] found id: ""
	I0722 11:55:21.548320   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.548331   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:21.548337   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:21.548403   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:21.584542   59674 cri.go:89] found id: ""
	I0722 11:55:21.584573   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.584584   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:21.584591   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:21.584655   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:21.621709   59674 cri.go:89] found id: ""
	I0722 11:55:21.621745   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.621758   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:21.621767   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:21.621854   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:21.656111   59674 cri.go:89] found id: ""
	I0722 11:55:21.656134   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.656143   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:21.656148   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:21.656197   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:21.692324   59674 cri.go:89] found id: ""
	I0722 11:55:21.692353   59674 logs.go:276] 0 containers: []
	W0722 11:55:21.692363   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:21.692374   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:21.692405   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:21.769527   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:21.769550   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:21.769566   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:21.850069   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:21.850107   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:21.890781   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:21.890816   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:21.952170   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:21.952211   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:24.467001   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:24.481526   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:55:24.481594   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:55:24.518694   59674 cri.go:89] found id: ""
	I0722 11:55:24.518724   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.518734   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:55:24.518740   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:55:24.518798   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:55:24.554606   59674 cri.go:89] found id: ""
	I0722 11:55:24.554629   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.554637   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:55:24.554642   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:55:24.554703   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:55:24.592042   59674 cri.go:89] found id: ""
	I0722 11:55:24.592072   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.592083   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:55:24.592090   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:55:24.592158   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:55:24.624456   59674 cri.go:89] found id: ""
	I0722 11:55:24.624479   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.624487   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:55:24.624493   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:55:24.624540   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:55:24.659502   59674 cri.go:89] found id: ""
	I0722 11:55:24.659526   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.659533   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:55:24.659541   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:55:24.659586   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:55:24.695548   59674 cri.go:89] found id: ""
	I0722 11:55:24.695572   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.695580   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:55:24.695585   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:55:24.695632   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:55:24.730320   59674 cri.go:89] found id: ""
	I0722 11:55:24.730362   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.730383   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:55:24.730391   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:55:24.730451   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:55:24.763002   59674 cri.go:89] found id: ""
	I0722 11:55:24.763031   59674 logs.go:276] 0 containers: []
	W0722 11:55:24.763042   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:55:24.763053   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:55:24.763068   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:55:24.801537   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:55:24.801568   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:55:24.855157   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:55:24.855189   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:55:24.872946   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:55:24.872983   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:55:24.943654   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:55:24.943683   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:55:24.943697   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 11:55:21.677205   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:23.677250   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:20.857023   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:22.857266   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:25.356958   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:24.840700   59477 pod_ready.go:81] duration metric: took 4m0.000727978s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" ...
	E0722 11:55:24.840728   59477 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wm2w8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:55:24.840745   59477 pod_ready.go:38] duration metric: took 4m14.023350526s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:55:24.840771   59477 kubeadm.go:597] duration metric: took 4m21.561007849s to restartPrimaryControlPlane
	W0722 11:55:24.840842   59477 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:55:24.840871   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
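	The embed-certs run (process 59477) hits the same 4m0s WaitExtra limit on its metrics-server pod, gives up on restarting the existing control plane, and falls back to a full kubeadm reset followed by a fresh kubeadm init. One way to check the same readiness condition by hand (a sketch; the pod name and namespace are taken from the log, the jsonpath filter is just one way to surface the Ready condition):

	    kubectl --context embed-certs-802149 -n kube-system get pod metrics-server-569cc877fc-wm2w8 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'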
	I0722 11:55:27.532539   59674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:55:27.551073   59674 kubeadm.go:597] duration metric: took 4m3.599954496s to restartPrimaryControlPlane
	W0722 11:55:27.551154   59674 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:55:27.551183   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:55:28.607726   59674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.056515088s)
	I0722 11:55:28.607808   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:55:28.622638   59674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:55:28.633327   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:55:28.643630   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:55:28.643657   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:55:28.643708   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:55:28.655424   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:55:28.655488   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:55:28.666415   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:55:28.678321   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:55:28.678387   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:55:28.687990   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:55:28.700637   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:55:28.700688   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:55:28.711737   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:55:28.723611   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:55:28.723672   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
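	Before re-running kubeadm init, minikube checks whether each kubeconfig under /etc/kubernetes still points at the expected control-plane endpoint and removes any that does not (here the files are simply missing, so every grep exits with status 2 and the rm is a no-op). The logged sequence is equivalent to roughly this loop (sketch; file list and URL are taken from the log):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done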
	I0722 11:55:28.734841   59674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:55:28.966498   59674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:55:25.677562   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:27.677626   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:29.678017   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:27.359533   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:29.856428   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:32.177943   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:34.677244   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:32.356225   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:34.357127   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:36.677815   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:39.176631   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:36.857121   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:38.857187   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:41.177346   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:43.179961   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:41.357029   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:43.857548   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:45.676921   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:47.677104   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:50.177979   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:45.858212   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:48.355737   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:50.357352   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:52.179852   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:54.678525   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:52.856789   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:54.857581   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:56.291211   59477 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.450312515s)
	I0722 11:55:56.291284   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:55:56.307108   59477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:55:56.316823   59477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:55:56.325987   59477 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:55:56.326008   59477 kubeadm.go:157] found existing configuration files:
	
	I0722 11:55:56.326040   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:55:56.334979   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:55:56.335022   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:55:56.344230   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:55:56.352903   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:55:56.352952   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:55:56.362589   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:55:56.371907   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:55:56.371960   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:55:56.381248   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:55:56.389979   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:55:56.390029   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:55:56.399143   59477 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:55:56.451195   59477 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 11:55:56.451336   59477 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:55:56.583288   59477 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:55:56.583416   59477 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:55:56.583545   59477 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:55:56.812941   59477 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:55:56.814801   59477 out.go:204]   - Generating certificates and keys ...
	I0722 11:55:56.814907   59477 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:55:56.815004   59477 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:55:56.815107   59477 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:55:56.815158   59477 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:55:56.815245   59477 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:55:56.815328   59477 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:55:56.815398   59477 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:55:56.815472   59477 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:55:56.815551   59477 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:55:56.815665   59477 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:55:56.815720   59477 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:55:56.815792   59477 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:55:56.905480   59477 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:55:57.235259   59477 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:55:57.382716   59477 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:55:57.782474   59477 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:55:57.975512   59477 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:55:57.975939   59477 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:55:57.978251   59477 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:55:57.980183   59477 out.go:204]   - Booting up control plane ...
	I0722 11:55:57.980296   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:55:57.980407   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:55:57.980501   59477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:55:57.997417   59477 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:55:57.998246   59477 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:55:57.998298   59477 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:55:58.125569   59477 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:55:58.125669   59477 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:55:59.127130   59477 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00142245s
	I0722 11:55:59.127288   59477 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:55:56.679572   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:59.177683   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:56.858200   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:55:59.356467   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:04.131970   59477 kubeadm.go:310] [api-check] The API server is healthy after 5.00210234s
	I0722 11:56:04.145149   59477 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:56:04.162087   59477 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:56:04.189220   59477 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:56:04.189501   59477 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-802149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:56:04.201016   59477 kubeadm.go:310] [bootstrap-token] Using token: kquhfx.1qbb4r033babuox0
	I0722 11:56:04.202331   59477 out.go:204]   - Configuring RBAC rules ...
	I0722 11:56:04.202440   59477 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:56:04.207324   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:56:04.217174   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:56:04.221591   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:56:04.225670   59477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:56:04.229980   59477 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:56:04.540237   59477 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:56:01.677898   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:03.678604   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:05.015791   59477 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:56:05.538526   59477 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:56:05.539474   59477 kubeadm.go:310] 
	I0722 11:56:05.539573   59477 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:56:05.539585   59477 kubeadm.go:310] 
	I0722 11:56:05.539684   59477 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:56:05.539701   59477 kubeadm.go:310] 
	I0722 11:56:05.539735   59477 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:56:05.539818   59477 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:56:05.539894   59477 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:56:05.539903   59477 kubeadm.go:310] 
	I0722 11:56:05.540003   59477 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:56:05.540026   59477 kubeadm.go:310] 
	I0722 11:56:05.540102   59477 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:56:05.540111   59477 kubeadm.go:310] 
	I0722 11:56:05.540178   59477 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:56:05.540280   59477 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:56:05.540390   59477 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:56:05.540399   59477 kubeadm.go:310] 
	I0722 11:56:05.540496   59477 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:56:05.540612   59477 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:56:05.540621   59477 kubeadm.go:310] 
	I0722 11:56:05.540765   59477 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kquhfx.1qbb4r033babuox0 \
	I0722 11:56:05.540917   59477 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:56:05.540954   59477 kubeadm.go:310] 	--control-plane 
	I0722 11:56:05.540963   59477 kubeadm.go:310] 
	I0722 11:56:05.541073   59477 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:56:05.541082   59477 kubeadm.go:310] 
	I0722 11:56:05.541188   59477 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kquhfx.1qbb4r033babuox0 \
	I0722 11:56:05.541330   59477 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:56:05.541765   59477 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:56:05.541892   59477 cni.go:84] Creating CNI manager for ""
	I0722 11:56:05.541910   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:56:05.543345   59477 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:56:01.357811   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:03.359464   60225 pod_ready.go:102] pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:04.851108   60225 pod_ready.go:81] duration metric: took 4m0.000807254s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" ...
	E0722 11:56:04.851137   60225 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mzcvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:56:04.851154   60225 pod_ready.go:38] duration metric: took 4m12.048821409s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:04.851185   60225 kubeadm.go:597] duration metric: took 4m21.969513024s to restartPrimaryControlPlane
	W0722 11:56:04.851256   60225 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:56:04.851288   60225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:56:05.544535   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:56:05.556946   59477 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
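	The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For the bridge CNI that minikube selects with the kvm2 driver and crio runtime, the file is a conflist of roughly the following shape (illustrative only, with an assumed pod subnet; not the exact contents of the file the test wrote):

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF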
	I0722 11:56:05.578633   59477 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:56:05.578705   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:05.578715   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-802149 minikube.k8s.io/updated_at=2024_07_22T11_56_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=embed-certs-802149 minikube.k8s.io/primary=true
	I0722 11:56:05.614944   59477 ops.go:34] apiserver oom_adj: -16
	I0722 11:56:05.773354   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:06.273578   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:06.773980   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:07.274302   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:07.774175   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:08.274316   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:08.774096   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:09.273401   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:05.678724   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:08.178575   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:09.774010   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.274337   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.773845   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:11.273387   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:11.773610   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:12.273579   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:12.774429   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:13.273474   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:13.774397   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:14.273900   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:10.677662   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:12.679646   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:15.177660   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:14.774140   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:15.273579   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:15.773981   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:16.273668   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:16.773814   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:17.274094   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:17.773477   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:18.273407   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:18.774424   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:19.274215   59477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:19.371507   59477 kubeadm.go:1113] duration metric: took 13.792861511s to wait for elevateKubeSystemPrivileges
	I0722 11:56:19.371549   59477 kubeadm.go:394] duration metric: took 5m16.138448524s to StartCluster
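	The long run of "kubectl get sa default" calls above is the elevateKubeSystemPrivileges wait: after kubeadm init, minikube applies the minikube-rbac cluster-admin binding and the node labels, then polls (at roughly half-second intervals, per the timestamps) until the "default" ServiceAccount exists, which indicates the apiserver and the service-account controller are far enough along to continue. A standalone equivalent of that wait, using the same binary and kubeconfig paths as the log (the timeout wrapper and 0.5s interval are illustrative):

	    timeout 60 bash -c '
	      until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	        sleep 0.5
	      done'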
	I0722 11:56:19.371572   59477 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:19.371660   59477 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:56:19.373430   59477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:19.373759   59477 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:56:19.373841   59477 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:56:19.373922   59477 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-802149"
	I0722 11:56:19.373932   59477 addons.go:69] Setting default-storageclass=true in profile "embed-certs-802149"
	I0722 11:56:19.373962   59477 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-802149"
	I0722 11:56:19.373963   59477 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-802149"
	W0722 11:56:19.373971   59477 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:56:19.373974   59477 addons.go:69] Setting metrics-server=true in profile "embed-certs-802149"
	I0722 11:56:19.373998   59477 config.go:182] Loaded profile config "embed-certs-802149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:56:19.374004   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.374013   59477 addons.go:234] Setting addon metrics-server=true in "embed-certs-802149"
	W0722 11:56:19.374021   59477 addons.go:243] addon metrics-server should already be in state true
	I0722 11:56:19.374044   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.374353   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374376   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374383   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.374390   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.374401   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.374418   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.375347   59477 out.go:177] * Verifying Kubernetes components...
	I0722 11:56:19.376850   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:56:19.393500   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0722 11:56:19.394178   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.394524   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36485
	I0722 11:56:19.394704   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0722 11:56:19.394894   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.395064   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395087   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395137   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.395433   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395451   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395471   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.395586   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.395607   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.395653   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.395754   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.395956   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.396317   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.396345   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.396481   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.396512   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.399476   59477 addons.go:234] Setting addon default-storageclass=true in "embed-certs-802149"
	W0722 11:56:19.399502   59477 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:56:19.399530   59477 host.go:66] Checking if "embed-certs-802149" exists ...
	I0722 11:56:19.399879   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.399908   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.411862   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44855
	I0722 11:56:19.412247   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.412708   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.412733   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.413106   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.413324   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.414100   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0722 11:56:19.414530   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.415017   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.415041   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.415100   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.415300   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0722 11:56:19.415340   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.415574   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.415662   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.416068   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.416095   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.416416   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.416861   59477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:19.416905   59477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:19.417086   59477 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:56:19.417365   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.418373   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:56:19.418392   59477 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:56:19.418411   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.419202   59477 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:56:19.420581   59477 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:19.420595   59477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:56:19.420608   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.421600   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.422201   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.422224   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.422367   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.422535   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.422697   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.422820   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.423577   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.424183   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.424211   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.424347   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.424543   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.424694   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.424812   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.432998   59477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0722 11:56:19.433395   59477 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:19.433820   59477 main.go:141] libmachine: Using API Version  1
	I0722 11:56:19.433837   59477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:19.434137   59477 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:19.434300   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetState
	I0722 11:56:19.435804   59477 main.go:141] libmachine: (embed-certs-802149) Calling .DriverName
	I0722 11:56:19.436013   59477 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:19.436029   59477 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:56:19.436043   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHHostname
	I0722 11:56:19.439161   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.439507   59477 main.go:141] libmachine: (embed-certs-802149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:8a", ip: ""} in network mk-embed-certs-802149: {Iface:virbr3 ExpiryTime:2024-07-22 12:50:48 +0000 UTC Type:0 Mac:52:54:00:ce:af:8a Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:embed-certs-802149 Clientid:01:52:54:00:ce:af:8a}
	I0722 11:56:19.439527   59477 main.go:141] libmachine: (embed-certs-802149) DBG | domain embed-certs-802149 has defined IP address 192.168.72.113 and MAC address 52:54:00:ce:af:8a in network mk-embed-certs-802149
	I0722 11:56:19.439666   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHPort
	I0722 11:56:19.439832   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHKeyPath
	I0722 11:56:19.439968   59477 main.go:141] libmachine: (embed-certs-802149) Calling .GetSSHUsername
	I0722 11:56:19.440111   59477 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/embed-certs-802149/id_rsa Username:docker}
	I0722 11:56:19.579586   59477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:56:19.613199   59477 node_ready.go:35] waiting up to 6m0s for node "embed-certs-802149" to be "Ready" ...
	I0722 11:56:19.621008   59477 node_ready.go:49] node "embed-certs-802149" has status "Ready":"True"
	I0722 11:56:19.621026   59477 node_ready.go:38] duration metric: took 7.803634ms for node "embed-certs-802149" to be "Ready" ...
	I0722 11:56:19.621035   59477 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:19.626247   59477 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:17.676844   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:19.677982   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:19.721316   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:19.751087   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:19.752762   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:56:19.752782   59477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:56:19.855879   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:56:19.855913   59477 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:56:19.929321   59477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:19.929353   59477 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:56:19.985335   59477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:20.449104   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449132   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449106   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449220   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449514   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449514   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.449531   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449540   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.449553   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.449566   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.449879   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.449880   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.449902   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.450851   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.450865   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.450872   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.450877   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.451078   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.451104   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.451119   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.470273   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.470292   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.470576   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.470623   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.470597   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.627931   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.627953   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.628276   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.628294   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.628293   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.628308   59477 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:20.628317   59477 main.go:141] libmachine: (embed-certs-802149) Calling .Close
	I0722 11:56:20.628560   59477 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:20.628605   59477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:20.628619   59477 addons.go:475] Verifying addon metrics-server=true in "embed-certs-802149"
	I0722 11:56:20.628625   59477 main.go:141] libmachine: (embed-certs-802149) DBG | Closing plugin on server side
	I0722 11:56:20.630168   59477 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:56:20.631410   59477 addons.go:510] duration metric: took 1.257573445s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 11:56:21.631628   59477 pod_ready.go:102] pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:22.159823   59477 pod_ready.go:92] pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.159847   59477 pod_ready.go:81] duration metric: took 2.533579062s for pod "coredns-7db6d8ff4d-c2dkr" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.159856   59477 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.180462   59477 pod_ready.go:92] pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.180487   59477 pod_ready.go:81] duration metric: took 20.623565ms for pod "coredns-7db6d8ff4d-kz8d9" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.180499   59477 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.194180   59477 pod_ready.go:92] pod "etcd-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.194207   59477 pod_ready.go:81] duration metric: took 13.700217ms for pod "etcd-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.194219   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.199321   59477 pod_ready.go:92] pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.199343   59477 pod_ready.go:81] duration metric: took 5.116564ms for pod "kube-apiserver-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.199355   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.203845   59477 pod_ready.go:92] pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.203865   59477 pod_ready.go:81] duration metric: took 4.502825ms for pod "kube-controller-manager-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.203875   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w89tg" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.529773   59477 pod_ready.go:92] pod "kube-proxy-w89tg" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.529797   59477 pod_ready.go:81] duration metric: took 325.914184ms for pod "kube-proxy-w89tg" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.529809   59477 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.930597   59477 pod_ready.go:92] pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:22.930620   59477 pod_ready.go:81] duration metric: took 400.802915ms for pod "kube-scheduler-embed-certs-802149" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:22.930631   59477 pod_ready.go:38] duration metric: took 3.309586025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:22.930649   59477 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:56:22.930707   59477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:56:22.946660   59477 api_server.go:72] duration metric: took 3.57286966s to wait for apiserver process to appear ...
	I0722 11:56:22.946684   59477 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:56:22.946703   59477 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0722 11:56:22.950940   59477 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I0722 11:56:22.951817   59477 api_server.go:141] control plane version: v1.30.3
	I0722 11:56:22.951840   59477 api_server.go:131] duration metric: took 5.148295ms to wait for apiserver health ...
	I0722 11:56:22.951848   59477 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:56:23.134122   59477 system_pods.go:59] 9 kube-system pods found
	I0722 11:56:23.134153   59477 system_pods.go:61] "coredns-7db6d8ff4d-c2dkr" [c82a689e-5a99-4889-808f-3e1e199323d8] Running
	I0722 11:56:23.134159   59477 system_pods.go:61] "coredns-7db6d8ff4d-kz8d9" [26d2d65c-aa13-4d94-b091-bf674fee0185] Running
	I0722 11:56:23.134163   59477 system_pods.go:61] "etcd-embed-certs-802149" [a30aead7-f9cf-487f-9d2e-dac877edf07a] Running
	I0722 11:56:23.134166   59477 system_pods.go:61] "kube-apiserver-embed-certs-802149" [b7f50315-5043-4abd-a40e-c2d285b66faa] Running
	I0722 11:56:23.134169   59477 system_pods.go:61] "kube-controller-manager-embed-certs-802149" [e96d4b9d-0bf0-40b4-b483-ba558587fe91] Running
	I0722 11:56:23.134172   59477 system_pods.go:61] "kube-proxy-w89tg" [da4d3074-e552-4c7b-ba0f-f57a3b80f529] Running
	I0722 11:56:23.134175   59477 system_pods.go:61] "kube-scheduler-embed-certs-802149" [5f1d8d32-7564-4d2a-ba97-283754771b15] Running
	I0722 11:56:23.134181   59477 system_pods.go:61] "metrics-server-569cc877fc-88d4n" [b705d674-b431-4946-aa67-871d7d2f9e08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:56:23.134186   59477 system_pods.go:61] "storage-provisioner" [a68fcb5f-42b5-408e-9c10-d86b14a1b993] Running
	I0722 11:56:23.134195   59477 system_pods.go:74] duration metric: took 182.340929ms to wait for pod list to return data ...
	I0722 11:56:23.134204   59477 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:56:23.330549   59477 default_sa.go:45] found service account: "default"
	I0722 11:56:23.330573   59477 default_sa.go:55] duration metric: took 196.363183ms for default service account to be created ...
	I0722 11:56:23.330582   59477 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:56:23.532750   59477 system_pods.go:86] 9 kube-system pods found
	I0722 11:56:23.532774   59477 system_pods.go:89] "coredns-7db6d8ff4d-c2dkr" [c82a689e-5a99-4889-808f-3e1e199323d8] Running
	I0722 11:56:23.532779   59477 system_pods.go:89] "coredns-7db6d8ff4d-kz8d9" [26d2d65c-aa13-4d94-b091-bf674fee0185] Running
	I0722 11:56:23.532784   59477 system_pods.go:89] "etcd-embed-certs-802149" [a30aead7-f9cf-487f-9d2e-dac877edf07a] Running
	I0722 11:56:23.532788   59477 system_pods.go:89] "kube-apiserver-embed-certs-802149" [b7f50315-5043-4abd-a40e-c2d285b66faa] Running
	I0722 11:56:23.532795   59477 system_pods.go:89] "kube-controller-manager-embed-certs-802149" [e96d4b9d-0bf0-40b4-b483-ba558587fe91] Running
	I0722 11:56:23.532799   59477 system_pods.go:89] "kube-proxy-w89tg" [da4d3074-e552-4c7b-ba0f-f57a3b80f529] Running
	I0722 11:56:23.532802   59477 system_pods.go:89] "kube-scheduler-embed-certs-802149" [5f1d8d32-7564-4d2a-ba97-283754771b15] Running
	I0722 11:56:23.532809   59477 system_pods.go:89] "metrics-server-569cc877fc-88d4n" [b705d674-b431-4946-aa67-871d7d2f9e08] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:56:23.532813   59477 system_pods.go:89] "storage-provisioner" [a68fcb5f-42b5-408e-9c10-d86b14a1b993] Running
	I0722 11:56:23.532821   59477 system_pods.go:126] duration metric: took 202.234836ms to wait for k8s-apps to be running ...
	I0722 11:56:23.532832   59477 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:56:23.532876   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:56:23.547953   59477 system_svc.go:56] duration metric: took 15.113032ms WaitForService to wait for kubelet
	I0722 11:56:23.547983   59477 kubeadm.go:582] duration metric: took 4.174196727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:56:23.548007   59477 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:56:23.730474   59477 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:56:23.730495   59477 node_conditions.go:123] node cpu capacity is 2
	I0722 11:56:23.730505   59477 node_conditions.go:105] duration metric: took 182.492899ms to run NodePressure ...
	I0722 11:56:23.730516   59477 start.go:241] waiting for startup goroutines ...
	I0722 11:56:23.730522   59477 start.go:246] waiting for cluster config update ...
	I0722 11:56:23.730532   59477 start.go:255] writing updated cluster config ...
	I0722 11:56:23.730772   59477 ssh_runner.go:195] Run: rm -f paused
	I0722 11:56:23.780571   59477 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 11:56:23.782536   59477 out.go:177] * Done! kubectl is now configured to use "embed-certs-802149" cluster and "default" namespace by default
	I0722 11:56:22.178416   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:24.676529   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:26.677122   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:29.177390   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:31.677291   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:33.677523   58921 pod_ready.go:102] pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace has status "Ready":"False"
	I0722 11:56:35.170828   58921 pod_ready.go:81] duration metric: took 4m0.000275806s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" ...
	E0722 11:56:35.170855   58921 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-2lbrr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 11:56:35.170871   58921 pod_ready.go:38] duration metric: took 4m13.545311637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:35.170901   58921 kubeadm.go:597] duration metric: took 4m20.764141089s to restartPrimaryControlPlane
	W0722 11:56:35.170949   58921 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 11:56:35.170973   58921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:56:36.176806   60225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.325500952s)
	I0722 11:56:36.176871   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:56:36.193398   60225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:56:36.203561   60225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:56:36.213561   60225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:56:36.213584   60225 kubeadm.go:157] found existing configuration files:
	
	I0722 11:56:36.213654   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 11:56:36.223204   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:56:36.223265   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:56:36.232550   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 11:56:36.241899   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:56:36.241961   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:56:36.252184   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 11:56:36.262462   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:56:36.262518   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:56:36.272942   60225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 11:56:36.282776   60225 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:56:36.282831   60225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 11:56:36.292375   60225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:56:36.490647   60225 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:56:44.713923   60225 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 11:56:44.713975   60225 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:56:44.714046   60225 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:56:44.714145   60225 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:56:44.714255   60225 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:56:44.714330   60225 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:56:44.715906   60225 out.go:204]   - Generating certificates and keys ...
	I0722 11:56:44.716026   60225 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:56:44.716122   60225 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:56:44.716247   60225 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:56:44.716346   60225 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:56:44.716449   60225 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:56:44.716530   60225 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:56:44.716617   60225 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:56:44.716704   60225 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:56:44.716820   60225 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:56:44.716939   60225 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:56:44.717000   60225 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:56:44.717078   60225 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:56:44.717159   60225 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:56:44.717238   60225 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:56:44.717312   60225 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:56:44.717397   60225 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:56:44.717471   60225 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:56:44.717594   60225 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:56:44.717684   60225 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:56:44.719097   60225 out.go:204]   - Booting up control plane ...
	I0722 11:56:44.719201   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:56:44.719288   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:56:44.719387   60225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:56:44.719548   60225 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:56:44.719662   60225 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:56:44.719698   60225 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:56:44.719819   60225 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:56:44.719909   60225 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:56:44.719969   60225 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001605769s
	I0722 11:56:44.720047   60225 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:56:44.720114   60225 kubeadm.go:310] [api-check] The API server is healthy after 4.501377908s
	I0722 11:56:44.720253   60225 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:56:44.720428   60225 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:56:44.720522   60225 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:56:44.720781   60225 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-605740 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:56:44.720842   60225 kubeadm.go:310] [bootstrap-token] Using token: 51n0hg.x5nghdd43rf7nm3m
	I0722 11:56:44.722095   60225 out.go:204]   - Configuring RBAC rules ...
	I0722 11:56:44.722193   60225 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:56:44.722266   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:56:44.722405   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:56:44.722575   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:56:44.722695   60225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:56:44.722769   60225 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:56:44.722875   60225 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:56:44.722916   60225 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:56:44.722957   60225 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:56:44.722966   60225 kubeadm.go:310] 
	I0722 11:56:44.723046   60225 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:56:44.723055   60225 kubeadm.go:310] 
	I0722 11:56:44.723117   60225 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:56:44.723123   60225 kubeadm.go:310] 
	I0722 11:56:44.723147   60225 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:56:44.723201   60225 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:56:44.723244   60225 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:56:44.723250   60225 kubeadm.go:310] 
	I0722 11:56:44.723313   60225 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:56:44.723324   60225 kubeadm.go:310] 
	I0722 11:56:44.723374   60225 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:56:44.723387   60225 kubeadm.go:310] 
	I0722 11:56:44.723462   60225 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:56:44.723568   60225 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:56:44.723624   60225 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:56:44.723630   60225 kubeadm.go:310] 
	I0722 11:56:44.723703   60225 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:56:44.723762   60225 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:56:44.723768   60225 kubeadm.go:310] 
	I0722 11:56:44.723832   60225 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 51n0hg.x5nghdd43rf7nm3m \
	I0722 11:56:44.723935   60225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:56:44.723960   60225 kubeadm.go:310] 	--control-plane 
	I0722 11:56:44.723966   60225 kubeadm.go:310] 
	I0722 11:56:44.724035   60225 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:56:44.724041   60225 kubeadm.go:310] 
	I0722 11:56:44.724109   60225 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 51n0hg.x5nghdd43rf7nm3m \
	I0722 11:56:44.724210   60225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:56:44.724222   60225 cni.go:84] Creating CNI manager for ""
	I0722 11:56:44.724231   60225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:56:44.725651   60225 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:56:44.726843   60225 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:56:44.737856   60225 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 11:56:44.756687   60225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:56:44.756772   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:44.756790   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-605740 minikube.k8s.io/updated_at=2024_07_22T11_56_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=default-k8s-diff-port-605740 minikube.k8s.io/primary=true
	I0722 11:56:44.782416   60225 ops.go:34] apiserver oom_adj: -16
	I0722 11:56:44.957801   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:45.458616   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:45.958542   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:46.458436   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:46.957908   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:47.458058   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:47.958520   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:48.457970   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:48.958357   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:49.457964   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:49.958236   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:50.458547   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:50.958594   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:51.457865   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:51.958297   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:52.458486   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:52.957877   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:53.458199   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:53.958331   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:54.458178   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:54.958725   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:55.458619   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:55.958861   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:56.458294   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:56.958145   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:57.458414   60225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:56:57.566568   60225 kubeadm.go:1113] duration metric: took 12.809852518s to wait for elevateKubeSystemPrivileges
	I0722 11:56:57.566604   60225 kubeadm.go:394] duration metric: took 5m14.748062926s to StartCluster
	I0722 11:56:57.566626   60225 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:57.566709   60225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:56:57.568307   60225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:56:57.568592   60225 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.87 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:56:57.568648   60225 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:56:57.568731   60225 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568765   60225 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.568778   60225 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:56:57.568777   60225 config.go:182] Loaded profile config "default-k8s-diff-port-605740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:56:57.568765   60225 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568775   60225 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-605740"
	I0722 11:56:57.568811   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.568813   60225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-605740"
	I0722 11:56:57.568819   60225 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.568828   60225 addons.go:243] addon metrics-server should already be in state true
	I0722 11:56:57.568849   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.569145   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569170   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569187   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.569191   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.569216   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.569265   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.570171   60225 out.go:177] * Verifying Kubernetes components...
	I0722 11:56:57.571536   60225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:56:57.585174   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44223
	I0722 11:56:57.585655   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.586149   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.586174   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.586532   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.587082   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.587135   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.588871   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43863
	I0722 11:56:57.588968   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0722 11:56:57.589289   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.589398   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.589785   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.589809   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.589875   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.589898   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.590183   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.590223   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.590393   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.590860   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.590906   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.594024   60225 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-605740"
	W0722 11:56:57.594046   60225 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:56:57.594074   60225 host.go:66] Checking if "default-k8s-diff-port-605740" exists ...
	I0722 11:56:57.594755   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.594794   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.604913   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0722 11:56:57.605449   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.606000   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.606017   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.606459   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37393
	I0722 11:56:57.606768   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.606871   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.607129   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.607259   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.607273   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.607591   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.607779   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.609472   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.609513   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46833
	I0722 11:56:57.609611   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.609857   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.610299   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.610314   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.610552   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.611030   60225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:56:57.611066   60225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:56:57.611075   60225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:56:57.611086   60225 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:56:57.612333   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:56:57.612352   60225 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:56:57.612373   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.612449   60225 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:57.612463   60225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:56:57.612480   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.615359   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.615950   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.615979   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.616137   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.616288   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.616341   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.616503   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.616636   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.616806   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.616830   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.617016   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.617204   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.617433   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.617587   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.627323   60225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0722 11:56:57.627674   60225 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:56:57.628110   60225 main.go:141] libmachine: Using API Version  1
	I0722 11:56:57.628129   60225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:56:57.628426   60225 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:56:57.628581   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetState
	I0722 11:56:57.630063   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .DriverName
	I0722 11:56:57.630250   60225 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:57.630264   60225 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:56:57.630276   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHHostname
	I0722 11:56:57.633223   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.633589   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:45:e9", ip: ""} in network mk-default-k8s-diff-port-605740: {Iface:virbr4 ExpiryTime:2024-07-22 12:51:27 +0000 UTC Type:0 Mac:52:54:00:23:45:e9 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:default-k8s-diff-port-605740 Clientid:01:52:54:00:23:45:e9}
	I0722 11:56:57.633652   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | domain default-k8s-diff-port-605740 has defined IP address 192.168.39.87 and MAC address 52:54:00:23:45:e9 in network mk-default-k8s-diff-port-605740
	I0722 11:56:57.633864   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHPort
	I0722 11:56:57.634041   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHKeyPath
	I0722 11:56:57.634208   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .GetSSHUsername
	I0722 11:56:57.634349   60225 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/default-k8s-diff-port-605740/id_rsa Username:docker}
	I0722 11:56:57.800318   60225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:56:57.838800   60225 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-605740" to be "Ready" ...
	I0722 11:56:57.858375   60225 node_ready.go:49] node "default-k8s-diff-port-605740" has status "Ready":"True"
	I0722 11:56:57.858401   60225 node_ready.go:38] duration metric: took 19.564389ms for node "default-k8s-diff-port-605740" to be "Ready" ...
	I0722 11:56:57.858412   60225 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:56:57.864271   60225 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.891296   60225 pod_ready.go:92] pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.891327   60225 pod_ready.go:81] duration metric: took 27.02499ms for pod "etcd-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.891341   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.904548   60225 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.904572   60225 pod_ready.go:81] duration metric: took 13.223985ms for pod "kube-apiserver-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.904582   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.922071   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:56:57.922090   60225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:56:57.936115   60225 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:56:57.936135   60225 pod_ready.go:81] duration metric: took 31.547556ms for pod "kube-controller-manager-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.936144   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-58qcp" in "kube-system" namespace to be "Ready" ...
	I0722 11:56:57.956826   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:56:57.959831   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:56:57.970183   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:56:57.970209   60225 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:56:58.023756   60225 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:58.023783   60225 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:56:58.132167   60225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:56:58.836074   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836101   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836129   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836151   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836444   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.836480   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836489   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.836496   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836507   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836603   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.836635   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836645   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.836653   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.836660   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.836797   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.836809   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.838425   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.838441   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.838457   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:58.855236   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:58.855255   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:58.855533   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:58.855551   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:58.855558   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:59.133028   60225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.000816157s)
	I0722 11:56:59.133092   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:59.133108   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:59.133395   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:59.133412   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:59.133420   60225 main.go:141] libmachine: Making call to close driver server
	I0722 11:56:59.133428   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) Calling .Close
	I0722 11:56:59.133715   60225 main.go:141] libmachine: (default-k8s-diff-port-605740) DBG | Closing plugin on server side
	I0722 11:56:59.133744   60225 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:56:59.133766   60225 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:56:59.133788   60225 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-605740"
	I0722 11:56:59.135326   60225 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:56:59.136408   60225 addons.go:510] duration metric: took 1.567760763s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 11:56:59.942782   60225 pod_ready.go:102] pod "kube-proxy-58qcp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:00.442434   60225 pod_ready.go:92] pod "kube-proxy-58qcp" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:00.442455   60225 pod_ready.go:81] duration metric: took 2.50630376s for pod "kube-proxy-58qcp" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.442463   60225 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.446225   60225 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:00.446246   60225 pod_ready.go:81] duration metric: took 3.778284ms for pod "kube-scheduler-default-k8s-diff-port-605740" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:00.446254   60225 pod_ready.go:38] duration metric: took 2.58782997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
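	A hand-run equivalent of this extra readiness gate (illustrative only; minikube performs this wait through its own client, and the label selectors below are copied from the log line above) would be a series of kubectl waits against the same profile's kubeconfig:

		# Wait for the system-critical components to report Ready (selectors as logged above)
		kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
		kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m
		kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m
		kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m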
	I0722 11:57:00.446267   60225 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:57:00.446310   60225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:57:00.461412   60225 api_server.go:72] duration metric: took 2.892790415s to wait for apiserver process to appear ...
	I0722 11:57:00.461431   60225 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:57:00.461448   60225 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8444/healthz ...
	I0722 11:57:00.465904   60225 api_server.go:279] https://192.168.39.87:8444/healthz returned 200:
	ok
	I0722 11:57:00.466558   60225 api_server.go:141] control plane version: v1.30.3
	I0722 11:57:00.466577   60225 api_server.go:131] duration metric: took 5.13931ms to wait for apiserver health ...
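	The healthz probe recorded here can be reproduced by hand; port 8444 is this profile's apiserver port as logged above, and kubectl can issue the same request with its own credentials instead of skipping TLS verification (a sketch, not part of the test run):

		# Same health check, done manually
		curl -sk https://192.168.39.87:8444/healthz
		kubectl get --raw /healthz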
	I0722 11:57:00.466585   60225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:57:00.471230   60225 system_pods.go:59] 9 kube-system pods found
	I0722 11:57:00.471254   60225 system_pods.go:61] "coredns-7db6d8ff4d-nlfgl" [c02dda63-e71b-429f-b9d5-0b2ca40e8dcc] Running
	I0722 11:57:00.471260   60225 system_pods.go:61] "coredns-7db6d8ff4d-tnnxf" [337c6df7-035c-488d-a123-a410d76d836b] Running
	I0722 11:57:00.471265   60225 system_pods.go:61] "etcd-default-k8s-diff-port-605740" [d1cda641-22de-4bda-ac2b-ed0a92bbde9f] Running
	I0722 11:57:00.471270   60225 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-605740" [b3c4fc30-392e-40db-8be3-fea337a40ca5] Running
	I0722 11:57:00.471274   60225 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-605740" [7cddd0d3-487d-47f5-9c6d-873c5731170e] Running
	I0722 11:57:00.471279   60225 system_pods.go:61] "kube-proxy-58qcp" [25c02c70-a840-410c-9d48-3d15a3927a77] Running
	I0722 11:57:00.471283   60225 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-605740" [b7678aec-927b-40c2-a91e-4c0444c59a90] Running
	I0722 11:57:00.471293   60225 system_pods.go:61] "metrics-server-569cc877fc-2xv7x" [7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:00.471299   60225 system_pods.go:61] "storage-provisioner" [c4ff4a3e-008c-4c4e-9eb3-281c46b10279] Running
	I0722 11:57:00.471309   60225 system_pods.go:74] duration metric: took 4.717009ms to wait for pod list to return data ...
	I0722 11:57:00.471320   60225 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:57:00.642325   60225 default_sa.go:45] found service account: "default"
	I0722 11:57:00.642356   60225 default_sa.go:55] duration metric: took 171.03007ms for default service account to be created ...
	I0722 11:57:00.642365   60225 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:57:00.846043   60225 system_pods.go:86] 9 kube-system pods found
	I0722 11:57:00.846071   60225 system_pods.go:89] "coredns-7db6d8ff4d-nlfgl" [c02dda63-e71b-429f-b9d5-0b2ca40e8dcc] Running
	I0722 11:57:00.846079   60225 system_pods.go:89] "coredns-7db6d8ff4d-tnnxf" [337c6df7-035c-488d-a123-a410d76d836b] Running
	I0722 11:57:00.846083   60225 system_pods.go:89] "etcd-default-k8s-diff-port-605740" [d1cda641-22de-4bda-ac2b-ed0a92bbde9f] Running
	I0722 11:57:00.846087   60225 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-605740" [b3c4fc30-392e-40db-8be3-fea337a40ca5] Running
	I0722 11:57:00.846092   60225 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-605740" [7cddd0d3-487d-47f5-9c6d-873c5731170e] Running
	I0722 11:57:00.846096   60225 system_pods.go:89] "kube-proxy-58qcp" [25c02c70-a840-410c-9d48-3d15a3927a77] Running
	I0722 11:57:00.846100   60225 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-605740" [b7678aec-927b-40c2-a91e-4c0444c59a90] Running
	I0722 11:57:00.846106   60225 system_pods.go:89] "metrics-server-569cc877fc-2xv7x" [7ef89c55-cb8e-46bd-ba95-7ba2eef36b7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:00.846110   60225 system_pods.go:89] "storage-provisioner" [c4ff4a3e-008c-4c4e-9eb3-281c46b10279] Running
	I0722 11:57:00.846118   60225 system_pods.go:126] duration metric: took 203.748606ms to wait for k8s-apps to be running ...
	I0722 11:57:00.846124   60225 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:57:00.846168   60225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:00.867261   60225 system_svc.go:56] duration metric: took 21.130025ms WaitForService to wait for kubelet
	I0722 11:57:00.867290   60225 kubeadm.go:582] duration metric: took 3.298668854s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:57:00.867314   60225 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:57:01.042201   60225 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:57:01.042226   60225 node_conditions.go:123] node cpu capacity is 2
	I0722 11:57:01.042237   60225 node_conditions.go:105] duration metric: took 174.91764ms to run NodePressure ...
	I0722 11:57:01.042249   60225 start.go:241] waiting for startup goroutines ...
	I0722 11:57:01.042256   60225 start.go:246] waiting for cluster config update ...
	I0722 11:57:01.042268   60225 start.go:255] writing updated cluster config ...
	I0722 11:57:01.042526   60225 ssh_runner.go:195] Run: rm -f paused
	I0722 11:57:01.090643   60225 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 11:57:01.092526   60225 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-605740" cluster and "default" namespace by default
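	With the metrics-server addon just enabled on this profile, a quick post-hoc verification could look like the following (illustrative; the deployment name is taken from the metrics-server pod names in the log, and v1beta1.metrics.k8s.io is the APIService that metrics-server conventionally registers):

		# Confirm the addon objects applied above are present and serving
		kubectl -n kube-system rollout status deployment/metrics-server --timeout=2m
		kubectl get apiservice v1beta1.metrics.k8s.io
		kubectl top nodes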
	I0722 11:57:01.339755   58921 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.168752701s)
	I0722 11:57:01.339827   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:01.368833   58921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 11:57:01.392011   58921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:57:01.403725   58921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:57:01.403746   58921 kubeadm.go:157] found existing configuration files:
	
	I0722 11:57:01.403795   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:57:01.421922   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:57:01.422011   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:57:01.434303   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:57:01.445095   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:57:01.445154   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:57:01.464906   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:57:01.475002   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:57:01.475074   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:57:01.484493   58921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:57:01.493467   58921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:57:01.493523   58921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
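	The stale-config pass above reduces to one rule: keep each kubeconfig under /etc/kubernetes only if it already points at the expected control-plane endpoint, otherwise remove it so kubeadm regenerates it. A plain-shell sketch of the same logic (a direct rewrite of the grep/rm commands logged above, not minikube's actual code):

		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
		    sudo rm -f "/etc/kubernetes/$f"   # missing or pointing elsewhere: drop it
		  fi
		done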
	I0722 11:57:01.502496   58921 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:57:01.550079   58921 kubeadm.go:310] W0722 11:57:01.524041    2933 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 11:57:01.551819   58921 kubeadm.go:310] W0722 11:57:01.525728    2933 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 11:57:01.670102   58921 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:57:10.497048   58921 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0722 11:57:10.497168   58921 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:57:10.497273   58921 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:57:10.497381   58921 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:57:10.497498   58921 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 11:57:10.497555   58921 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:57:10.498805   58921 out.go:204]   - Generating certificates and keys ...
	I0722 11:57:10.498905   58921 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:57:10.498982   58921 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:57:10.499087   58921 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:57:10.499182   58921 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:57:10.499265   58921 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:57:10.499326   58921 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:57:10.499385   58921 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:57:10.499500   58921 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:57:10.499633   58921 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:57:10.499724   58921 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:57:10.499784   58921 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:57:10.499840   58921 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:57:10.499892   58921 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:57:10.499982   58921 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 11:57:10.500064   58921 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:57:10.500155   58921 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:57:10.500241   58921 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:57:10.500343   58921 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:57:10.500442   58921 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:57:10.501847   58921 out.go:204]   - Booting up control plane ...
	I0722 11:57:10.501931   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:57:10.501995   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:57:10.502068   58921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:57:10.502203   58921 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:57:10.502318   58921 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:57:10.502367   58921 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:57:10.502477   58921 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 11:57:10.502541   58921 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 11:57:10.502599   58921 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501448538s
	I0722 11:57:10.502660   58921 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 11:57:10.502712   58921 kubeadm.go:310] [api-check] The API server is healthy after 5.001578291s
	I0722 11:57:10.502801   58921 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 11:57:10.502914   58921 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 11:57:10.502962   58921 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 11:57:10.503159   58921 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-339929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 11:57:10.503211   58921 kubeadm.go:310] [bootstrap-token] Using token: ivof4z.0tnj9kdw05524oxn
	I0722 11:57:10.504409   58921 out.go:204]   - Configuring RBAC rules ...
	I0722 11:57:10.504501   58921 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 11:57:10.504616   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 11:57:10.504780   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 11:57:10.504970   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 11:57:10.505144   58921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 11:57:10.505257   58921 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 11:57:10.505410   58921 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 11:57:10.505471   58921 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 11:57:10.505538   58921 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 11:57:10.505546   58921 kubeadm.go:310] 
	I0722 11:57:10.505631   58921 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 11:57:10.505649   58921 kubeadm.go:310] 
	I0722 11:57:10.505755   58921 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 11:57:10.505764   58921 kubeadm.go:310] 
	I0722 11:57:10.505804   58921 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 11:57:10.505897   58921 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 11:57:10.505972   58921 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 11:57:10.505982   58921 kubeadm.go:310] 
	I0722 11:57:10.506059   58921 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 11:57:10.506067   58921 kubeadm.go:310] 
	I0722 11:57:10.506128   58921 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 11:57:10.506136   58921 kubeadm.go:310] 
	I0722 11:57:10.506205   58921 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 11:57:10.506306   58921 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 11:57:10.506414   58921 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 11:57:10.506423   58921 kubeadm.go:310] 
	I0722 11:57:10.506520   58921 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 11:57:10.506617   58921 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 11:57:10.506626   58921 kubeadm.go:310] 
	I0722 11:57:10.506742   58921 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ivof4z.0tnj9kdw05524oxn \
	I0722 11:57:10.506885   58921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 \
	I0722 11:57:10.506922   58921 kubeadm.go:310] 	--control-plane 
	I0722 11:57:10.506931   58921 kubeadm.go:310] 
	I0722 11:57:10.507044   58921 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 11:57:10.507057   58921 kubeadm.go:310] 
	I0722 11:57:10.507156   58921 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ivof4z.0tnj9kdw05524oxn \
	I0722 11:57:10.507309   58921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b01b7ff61de3588bab9ae2cb9ac5d8186929543757ea11978c028e88bb8483e0 
	I0722 11:57:10.507321   58921 cni.go:84] Creating CNI manager for ""
	I0722 11:57:10.507330   58921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 11:57:10.508685   58921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 11:57:10.509747   58921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 11:57:10.520250   58921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
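	The conflist contents are not echoed in the log; only the path and size appear. A quick way to inspect what was written on the node (illustrative check, run over minikube ssh or directly on the guest):

		# Verify the bridge CNI config landed where CRI-O/kubelet will pick it up
		sudo ls -la /etc/cni/net.d/
		sudo cat /etc/cni/net.d/1-k8s.conflist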
	I0722 11:57:10.540094   58921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 11:57:10.540196   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:10.540212   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-339929 minikube.k8s.io/updated_at=2024_07_22T11_57_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=no-preload-339929 minikube.k8s.io/primary=true
	I0722 11:57:10.763453   58921 ops.go:34] apiserver oom_adj: -16
	I0722 11:57:10.763505   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:11.264268   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:11.764311   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:12.264344   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:12.764563   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:13.264149   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:13.764260   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:14.263595   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:14.763794   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:15.263787   58921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 11:57:15.343777   58921 kubeadm.go:1113] duration metric: took 4.803631766s to wait for elevateKubeSystemPrivileges
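	The repeated "kubectl get sa default" calls above are a poll loop waiting for the cluster's default ServiceAccount to exist before system privileges are elevated. A one-liner with the same effect (a sketch using the exact binary and kubeconfig paths from the log):

		until sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default \
		    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		  sleep 0.5   # retry until the default ServiceAccount appears
		done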
	I0722 11:57:15.343817   58921 kubeadm.go:394] duration metric: took 5m0.988139889s to StartCluster
	I0722 11:57:15.343840   58921 settings.go:142] acquiring lock: {Name:mkf7478b24488b186c4641b3d55c9f3cb539e068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:57:15.343940   58921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:57:15.345913   58921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19313-5960/kubeconfig: {Name:mk89e62d5d10525cae33a0e02c13f1b70b021f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 11:57:15.346216   58921 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 11:57:15.346387   58921 config.go:182] Loaded profile config "no-preload-339929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 11:57:15.346343   58921 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 11:57:15.346441   58921 addons.go:69] Setting storage-provisioner=true in profile "no-preload-339929"
	I0722 11:57:15.346454   58921 addons.go:69] Setting metrics-server=true in profile "no-preload-339929"
	I0722 11:57:15.346483   58921 addons.go:234] Setting addon metrics-server=true in "no-preload-339929"
	W0722 11:57:15.346491   58921 addons.go:243] addon metrics-server should already be in state true
	I0722 11:57:15.346485   58921 addons.go:234] Setting addon storage-provisioner=true in "no-preload-339929"
	W0722 11:57:15.346502   58921 addons.go:243] addon storage-provisioner should already be in state true
	I0722 11:57:15.346515   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.346529   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.346445   58921 addons.go:69] Setting default-storageclass=true in profile "no-preload-339929"
	I0722 11:57:15.346600   58921 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-339929"
	I0722 11:57:15.346890   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.346920   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.346890   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.346994   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.347007   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.347025   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.347928   58921 out.go:177] * Verifying Kubernetes components...
	I0722 11:57:15.352932   58921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 11:57:15.362633   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0722 11:57:15.362665   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I0722 11:57:15.362630   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34691
	I0722 11:57:15.363041   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363053   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363133   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.363521   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363537   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363544   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363558   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363568   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.363587   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.363905   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.363945   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.364078   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.364104   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.364485   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.364517   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.364602   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.364629   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.367146   58921 addons.go:234] Setting addon default-storageclass=true in "no-preload-339929"
	W0722 11:57:15.367170   58921 addons.go:243] addon default-storageclass should already be in state true
	I0722 11:57:15.367197   58921 host.go:66] Checking if "no-preload-339929" exists ...
	I0722 11:57:15.367419   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.367436   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.380125   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46561
	I0722 11:57:15.380393   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
	I0722 11:57:15.380557   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.380972   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.381545   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.381546   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.381570   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.381585   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.381956   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.381987   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.382133   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.382152   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.383766   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.383925   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.384000   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0722 11:57:15.384347   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.384833   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.384856   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.385195   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.385635   58921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:57:15.385664   58921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:57:15.386055   58921 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 11:57:15.386060   58921 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 11:57:15.387105   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 11:57:15.387119   58921 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 11:57:15.387138   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.387186   58921 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:57:15.387197   58921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 11:57:15.387215   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.390591   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.390928   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.390975   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.390996   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.391233   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.391366   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.391387   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.391423   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.391599   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.391632   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.391802   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.391841   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.391986   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.392111   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.401709   58921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34965
	I0722 11:57:15.402082   58921 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:57:15.402543   58921 main.go:141] libmachine: Using API Version  1
	I0722 11:57:15.402563   58921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:57:15.402854   58921 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:57:15.403074   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetState
	I0722 11:57:15.404406   58921 main.go:141] libmachine: (no-preload-339929) Calling .DriverName
	I0722 11:57:15.404603   58921 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 11:57:15.404617   58921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 11:57:15.404633   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHHostname
	I0722 11:57:15.407332   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.407829   58921 main.go:141] libmachine: (no-preload-339929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:72:69", ip: ""} in network mk-no-preload-339929: {Iface:virbr1 ExpiryTime:2024-07-22 12:51:47 +0000 UTC Type:0 Mac:52:54:00:8d:72:69 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:no-preload-339929 Clientid:01:52:54:00:8d:72:69}
	I0722 11:57:15.407853   58921 main.go:141] libmachine: (no-preload-339929) DBG | domain no-preload-339929 has defined IP address 192.168.61.112 and MAC address 52:54:00:8d:72:69 in network mk-no-preload-339929
	I0722 11:57:15.408041   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHPort
	I0722 11:57:15.408218   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHKeyPath
	I0722 11:57:15.408356   58921 main.go:141] libmachine: (no-preload-339929) Calling .GetSSHUsername
	I0722 11:57:15.408491   58921 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/no-preload-339929/id_rsa Username:docker}
	I0722 11:57:15.550538   58921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 11:57:15.568066   58921 node_ready.go:35] waiting up to 6m0s for node "no-preload-339929" to be "Ready" ...
	I0722 11:57:15.577034   58921 node_ready.go:49] node "no-preload-339929" has status "Ready":"True"
	I0722 11:57:15.577054   58921 node_ready.go:38] duration metric: took 8.96328ms for node "no-preload-339929" to be "Ready" ...
	I0722 11:57:15.577062   58921 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:57:15.587213   58921 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:15.629092   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 11:57:15.714856   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 11:57:15.714885   58921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 11:57:15.746923   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 11:57:15.781300   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 11:57:15.781327   58921 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 11:57:15.842787   58921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:57:15.842816   58921 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 11:57:15.884901   58921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 11:57:16.165926   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.165955   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166184   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166200   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166255   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166296   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166315   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166329   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166340   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166454   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166497   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166520   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.166542   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.166581   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166595   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.166551   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166519   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.166954   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.166969   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.199171   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.199196   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.199533   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.199558   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.199573   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.678992   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.679015   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.679366   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.679389   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.679400   58921 main.go:141] libmachine: Making call to close driver server
	I0722 11:57:16.679400   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.679408   58921 main.go:141] libmachine: (no-preload-339929) Calling .Close
	I0722 11:57:16.679658   58921 main.go:141] libmachine: (no-preload-339929) DBG | Closing plugin on server side
	I0722 11:57:16.679699   58921 main.go:141] libmachine: Successfully made call to close driver server
	I0722 11:57:16.679708   58921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 11:57:16.679719   58921 addons.go:475] Verifying addon metrics-server=true in "no-preload-339929"
	I0722 11:57:16.681483   58921 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 11:57:16.682888   58921 addons.go:510] duration metric: took 1.336544744s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
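	(Annotation, not part of the captured log.) The addon step above copies each manifest to the node and applies them in one invocation of the cluster's bundled kubectl with several -f flags. The sketch below replays that apply from Go with os/exec; the binary path, kubeconfig, and manifest list are taken from the log, but running it locally (rather than over SSH as minikube does) is an assumption for illustration.

	// apply_addons_sketch.go - illustrative; minikube runs this command on the node over SSH.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		// Build "kubectl apply -f a.yaml -f b.yaml ..." exactly as in the logged command.
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl", args...)
		// Point kubectl at the kubeconfig used in the log, inheriting the rest of the environment.
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out))
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}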
	I0722 11:57:17.596659   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:20.093596   58921 pod_ready.go:102] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"False"
	I0722 11:57:24.750495   59674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:57:24.750641   59674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 11:57:24.752309   59674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:57:24.752368   59674 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:57:24.752499   59674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:57:24.752662   59674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:57:24.752788   59674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:57:24.752851   59674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:57:24.754464   59674 out.go:204]   - Generating certificates and keys ...
	I0722 11:57:24.754528   59674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:57:24.754595   59674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:57:24.754712   59674 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:57:24.754926   59674 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:57:24.755033   59674 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:57:24.755114   59674 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:57:24.755188   59674 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:57:24.755276   59674 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:57:24.755374   59674 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:57:24.755472   59674 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:57:24.755513   59674 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:57:24.755561   59674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:57:24.755606   59674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:57:24.755647   59674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:57:24.755700   59674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:57:24.755742   59674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:57:24.755836   59674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:57:24.755950   59674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:57:24.755986   59674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:57:24.756089   59674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:57:24.757395   59674 out.go:204]   - Booting up control plane ...
	I0722 11:57:24.757482   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:57:24.757566   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:57:24.757657   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:57:24.757905   59674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:57:24.758131   59674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:57:24.758205   59674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:57:24.758311   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.758565   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.758650   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.758852   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.758957   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759153   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759217   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759412   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759495   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:57:24.759688   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:57:24.759696   59674 kubeadm.go:310] 
	I0722 11:57:24.759729   59674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:57:24.759791   59674 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:57:24.759812   59674 kubeadm.go:310] 
	I0722 11:57:24.759868   59674 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:57:24.759903   59674 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:57:24.760077   59674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:57:24.760094   59674 kubeadm.go:310] 
	I0722 11:57:24.760245   59674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:57:24.760300   59674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:57:24.760350   59674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:57:24.760363   59674 kubeadm.go:310] 
	I0722 11:57:24.760534   59674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:57:24.760640   59674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:57:24.760654   59674 kubeadm.go:310] 
	I0722 11:57:24.760819   59674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:57:24.760902   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:57:24.761013   59674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:57:24.761124   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:57:24.761213   59674 kubeadm.go:310] 
	W0722 11:57:24.761263   59674 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0722 11:57:24.761321   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 11:57:25.222130   59674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:25.236593   59674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 11:57:25.247009   59674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 11:57:25.247026   59674 kubeadm.go:157] found existing configuration files:
	
	I0722 11:57:25.247078   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 11:57:25.256617   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 11:57:25.256674   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 11:57:25.265950   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 11:57:25.275080   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 11:57:25.275133   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 11:57:25.285058   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 11:57:25.294015   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 11:57:25.294070   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 11:57:25.304009   59674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 11:57:25.313492   59674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 11:57:25.313565   59674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
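	(Annotation, not part of the captured log.) Before retrying kubeadm init, the lines above show a stale-config pass: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails. A hedged sketch of that check-and-remove loop, run locally rather than through minikube's ssh_runner (an assumption for the example):

	// stale_config_sketch.go - illustrative; minikube performs these steps on the node via SSH.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the endpoint is absent or the file is missing,
			// which is exactly the "may not be in ... - will remove" case in the log.
			if err := exec.Command("grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
				_ = os.Remove(f) // the log uses `sudo rm -f`
			}
		}
	}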
	I0722 11:57:25.322903   59674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 11:57:22.593478   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.593498   58921 pod_ready.go:81] duration metric: took 7.006267885s for pod "coredns-5cfdc65f69-vg4wp" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.593505   58921 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.598122   58921 pod_ready.go:92] pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.598149   58921 pod_ready.go:81] duration metric: took 4.631196ms for pod "coredns-5cfdc65f69-xxf6t" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.598159   58921 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.602448   58921 pod_ready.go:92] pod "etcd-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.602466   58921 pod_ready.go:81] duration metric: took 4.300795ms for pod "etcd-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.602474   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.607921   58921 pod_ready.go:92] pod "kube-apiserver-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:22.607940   58921 pod_ready.go:81] duration metric: took 5.46066ms for pod "kube-apiserver-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:22.607951   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.114900   58921 pod_ready.go:92] pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.114929   58921 pod_ready.go:81] duration metric: took 1.506968399s for pod "kube-controller-manager-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.114942   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b5xwg" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.190875   58921 pod_ready.go:92] pod "kube-proxy-b5xwg" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.190895   58921 pod_ready.go:81] duration metric: took 75.947595ms for pod "kube-proxy-b5xwg" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.190905   58921 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.590994   58921 pod_ready.go:92] pod "kube-scheduler-no-preload-339929" in "kube-system" namespace has status "Ready":"True"
	I0722 11:57:24.591020   58921 pod_ready.go:81] duration metric: took 400.108088ms for pod "kube-scheduler-no-preload-339929" in "kube-system" namespace to be "Ready" ...
	I0722 11:57:24.591029   58921 pod_ready.go:38] duration metric: took 9.013958119s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 11:57:24.591051   58921 api_server.go:52] waiting for apiserver process to appear ...
	I0722 11:57:24.591110   58921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:57:24.609675   58921 api_server.go:72] duration metric: took 9.263421304s to wait for apiserver process to appear ...
	I0722 11:57:24.609701   58921 api_server.go:88] waiting for apiserver healthz status ...
	I0722 11:57:24.609719   58921 api_server.go:253] Checking apiserver healthz at https://192.168.61.112:8443/healthz ...
	I0722 11:57:24.613446   58921 api_server.go:279] https://192.168.61.112:8443/healthz returned 200:
	ok
	I0722 11:57:24.614282   58921 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 11:57:24.614301   58921 api_server.go:131] duration metric: took 4.591983ms to wait for apiserver health ...
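	(Annotation, not part of the captured log.) Once the control-plane pods are Ready, the log probes the apiserver's /healthz endpoint and expects a 200 response with body "ok" before reading the server version. A minimal sketch of that probe is below; skipping TLS verification is an assumption made to keep the example self-contained (minikube trusts the cluster CA instead).

	// healthz_sketch.go - illustrative apiserver health probe.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// InsecureSkipVerify keeps the sketch self-contained; a real check
			// would verify against the cluster's CA certificate.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.61.112:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the body "ok", as seen in the log.
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
	}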
	I0722 11:57:24.614310   58921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 11:57:24.796872   58921 system_pods.go:59] 9 kube-system pods found
	I0722 11:57:24.796910   58921 system_pods.go:61] "coredns-5cfdc65f69-vg4wp" [3556f321-9c0a-437f-a06e-4eca4b07781d] Running
	I0722 11:57:24.796917   58921 system_pods.go:61] "coredns-5cfdc65f69-xxf6t" [6e933cad-a95a-47c4-b8b9-89205619fb70] Running
	I0722 11:57:24.796922   58921 system_pods.go:61] "etcd-no-preload-339929" [6cd32101-c444-44a3-b024-708228c1b2de] Running
	I0722 11:57:24.796927   58921 system_pods.go:61] "kube-apiserver-no-preload-339929" [cf127459-d105-4ed0-9b65-653e9353e123] Running
	I0722 11:57:24.796933   58921 system_pods.go:61] "kube-controller-manager-no-preload-339929" [11b4341e-5d2d-41c2-a078-6345546aa418] Running
	I0722 11:57:24.796940   58921 system_pods.go:61] "kube-proxy-b5xwg" [6ec19ad2-170e-4402-bcb7-ebf14a2537ce] Running
	I0722 11:57:24.796944   58921 system_pods.go:61] "kube-scheduler-no-preload-339929" [bb0b4b2e-ba9c-4521-bcf9-67230983dc8e] Running
	I0722 11:57:24.796953   58921 system_pods.go:61] "metrics-server-78fcd8795b-9vzx2" [bb2ae44c-3190-4025-8f2e-e236c52da27e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:24.796960   58921 system_pods.go:61] "storage-provisioner" [f56d91d7-a252-485d-936d-3f44804d26ec] Running
	I0722 11:57:24.796973   58921 system_pods.go:74] duration metric: took 182.655813ms to wait for pod list to return data ...
	I0722 11:57:24.796985   58921 default_sa.go:34] waiting for default service account to be created ...
	I0722 11:57:24.992009   58921 default_sa.go:45] found service account: "default"
	I0722 11:57:24.992032   58921 default_sa.go:55] duration metric: took 195.040103ms for default service account to be created ...
	I0722 11:57:24.992040   58921 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 11:57:25.196738   58921 system_pods.go:86] 9 kube-system pods found
	I0722 11:57:25.196763   58921 system_pods.go:89] "coredns-5cfdc65f69-vg4wp" [3556f321-9c0a-437f-a06e-4eca4b07781d] Running
	I0722 11:57:25.196768   58921 system_pods.go:89] "coredns-5cfdc65f69-xxf6t" [6e933cad-a95a-47c4-b8b9-89205619fb70] Running
	I0722 11:57:25.196772   58921 system_pods.go:89] "etcd-no-preload-339929" [6cd32101-c444-44a3-b024-708228c1b2de] Running
	I0722 11:57:25.196777   58921 system_pods.go:89] "kube-apiserver-no-preload-339929" [cf127459-d105-4ed0-9b65-653e9353e123] Running
	I0722 11:57:25.196781   58921 system_pods.go:89] "kube-controller-manager-no-preload-339929" [11b4341e-5d2d-41c2-a078-6345546aa418] Running
	I0722 11:57:25.196785   58921 system_pods.go:89] "kube-proxy-b5xwg" [6ec19ad2-170e-4402-bcb7-ebf14a2537ce] Running
	I0722 11:57:25.196789   58921 system_pods.go:89] "kube-scheduler-no-preload-339929" [bb0b4b2e-ba9c-4521-bcf9-67230983dc8e] Running
	I0722 11:57:25.196795   58921 system_pods.go:89] "metrics-server-78fcd8795b-9vzx2" [bb2ae44c-3190-4025-8f2e-e236c52da27e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 11:57:25.196799   58921 system_pods.go:89] "storage-provisioner" [f56d91d7-a252-485d-936d-3f44804d26ec] Running
	I0722 11:57:25.196806   58921 system_pods.go:126] duration metric: took 204.761601ms to wait for k8s-apps to be running ...
	I0722 11:57:25.196813   58921 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 11:57:25.196855   58921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:57:25.217589   58921 system_svc.go:56] duration metric: took 20.766557ms WaitForService to wait for kubelet
	I0722 11:57:25.217619   58921 kubeadm.go:582] duration metric: took 9.871369454s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 11:57:25.217641   58921 node_conditions.go:102] verifying NodePressure condition ...
	I0722 11:57:25.395091   58921 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 11:57:25.395116   58921 node_conditions.go:123] node cpu capacity is 2
	I0722 11:57:25.395128   58921 node_conditions.go:105] duration metric: took 177.480389ms to run NodePressure ...
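	(Annotation, not part of the captured log.) The NodePressure step above reads the node's reported capacity (ephemeral storage and CPU). A hedged client-go sketch reading the same status fields; the kubeconfig path is an assumption.

	// node_capacity_sketch.go - illustrative read of node capacity, not minikube source.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity is a ResourceList keyed by resource name, as reported in node status.
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}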
	I0722 11:57:25.395143   58921 start.go:241] waiting for startup goroutines ...
	I0722 11:57:25.395159   58921 start.go:246] waiting for cluster config update ...
	I0722 11:57:25.395173   58921 start.go:255] writing updated cluster config ...
	I0722 11:57:25.395623   58921 ssh_runner.go:195] Run: rm -f paused
	I0722 11:57:25.449438   58921 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0722 11:57:25.450840   58921 out.go:177] * Done! kubectl is now configured to use "no-preload-339929" cluster and "default" namespace by default
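	(Annotation, not part of the captured log.) The final line compares the local kubectl version (1.30.3) against the cluster version (1.31.0-beta.0) and reports a minor-version skew of 1. A minimal sketch of that comparison, assuming simple "major.minor..." version strings:

	// version_skew_sketch.go - illustrative minor-skew computation.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor component from a "major.minor[.patch][-pre]" version string.
	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0
		}
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		kubectl, cluster := "1.30.3", "1.31.0-beta.0"
		skew := minor(cluster) - minor(kubectl)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
	}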
	I0722 11:57:25.545662   59674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 11:59:21.714624   59674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 11:59:21.714729   59674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 11:59:21.716617   59674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 11:59:21.716683   59674 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 11:59:21.716771   59674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 11:59:21.716939   59674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 11:59:21.717077   59674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 11:59:21.717136   59674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 11:59:21.718742   59674 out.go:204]   - Generating certificates and keys ...
	I0722 11:59:21.718837   59674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 11:59:21.718927   59674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 11:59:21.718995   59674 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 11:59:21.719065   59674 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 11:59:21.719140   59674 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 11:59:21.719187   59674 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 11:59:21.719251   59674 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 11:59:21.719329   59674 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 11:59:21.719408   59674 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 11:59:21.719497   59674 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 11:59:21.719538   59674 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 11:59:21.719592   59674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 11:59:21.719635   59674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 11:59:21.719680   59674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 11:59:21.719745   59674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 11:59:21.719823   59674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 11:59:21.719970   59674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 11:59:21.720056   59674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 11:59:21.720090   59674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 11:59:21.720147   59674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 11:59:21.721505   59674 out.go:204]   - Booting up control plane ...
	I0722 11:59:21.721586   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 11:59:21.721656   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 11:59:21.721712   59674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 11:59:21.721778   59674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 11:59:21.721923   59674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 11:59:21.721988   59674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 11:59:21.722045   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722201   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722272   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722431   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722488   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722658   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722730   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.722885   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.722943   59674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 11:59:21.723110   59674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 11:59:21.723118   59674 kubeadm.go:310] 
	I0722 11:59:21.723154   59674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 11:59:21.723192   59674 kubeadm.go:310] 		timed out waiting for the condition
	I0722 11:59:21.723198   59674 kubeadm.go:310] 
	I0722 11:59:21.723226   59674 kubeadm.go:310] 	This error is likely caused by:
	I0722 11:59:21.723255   59674 kubeadm.go:310] 		- The kubelet is not running
	I0722 11:59:21.723339   59674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 11:59:21.723346   59674 kubeadm.go:310] 
	I0722 11:59:21.723442   59674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 11:59:21.723495   59674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 11:59:21.723537   59674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 11:59:21.723546   59674 kubeadm.go:310] 
	I0722 11:59:21.723709   59674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 11:59:21.723823   59674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 11:59:21.723833   59674 kubeadm.go:310] 
	I0722 11:59:21.723941   59674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 11:59:21.724023   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 11:59:21.724086   59674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 11:59:21.724156   59674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 11:59:21.724197   59674 kubeadm.go:310] 
	I0722 11:59:21.724212   59674 kubeadm.go:394] duration metric: took 7m57.831193066s to StartCluster
	I0722 11:59:21.724246   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 11:59:21.724296   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 11:59:21.771578   59674 cri.go:89] found id: ""
	I0722 11:59:21.771611   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.771622   59674 logs.go:278] No container was found matching "kube-apiserver"
	I0722 11:59:21.771631   59674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 11:59:21.771694   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 11:59:21.809027   59674 cri.go:89] found id: ""
	I0722 11:59:21.809055   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.809065   59674 logs.go:278] No container was found matching "etcd"
	I0722 11:59:21.809071   59674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 11:59:21.809143   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 11:59:21.844667   59674 cri.go:89] found id: ""
	I0722 11:59:21.844690   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.844698   59674 logs.go:278] No container was found matching "coredns"
	I0722 11:59:21.844703   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 11:59:21.844754   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 11:59:21.888054   59674 cri.go:89] found id: ""
	I0722 11:59:21.888078   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.888086   59674 logs.go:278] No container was found matching "kube-scheduler"
	I0722 11:59:21.888091   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 11:59:21.888150   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 11:59:21.931688   59674 cri.go:89] found id: ""
	I0722 11:59:21.931711   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.931717   59674 logs.go:278] No container was found matching "kube-proxy"
	I0722 11:59:21.931722   59674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 11:59:21.931775   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 11:59:21.974044   59674 cri.go:89] found id: ""
	I0722 11:59:21.974074   59674 logs.go:276] 0 containers: []
	W0722 11:59:21.974095   59674 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 11:59:21.974102   59674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 11:59:21.974170   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 11:59:22.010302   59674 cri.go:89] found id: ""
	I0722 11:59:22.010326   59674 logs.go:276] 0 containers: []
	W0722 11:59:22.010334   59674 logs.go:278] No container was found matching "kindnet"
	I0722 11:59:22.010338   59674 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 11:59:22.010385   59674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 11:59:22.047170   59674 cri.go:89] found id: ""
	I0722 11:59:22.047201   59674 logs.go:276] 0 containers: []
	W0722 11:59:22.047212   59674 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 11:59:22.047224   59674 logs.go:123] Gathering logs for container status ...
	I0722 11:59:22.047237   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 11:59:22.086648   59674 logs.go:123] Gathering logs for kubelet ...
	I0722 11:59:22.086678   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 11:59:22.141255   59674 logs.go:123] Gathering logs for dmesg ...
	I0722 11:59:22.141288   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 11:59:22.157063   59674 logs.go:123] Gathering logs for describe nodes ...
	I0722 11:59:22.157095   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 11:59:22.244259   59674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 11:59:22.244284   59674 logs.go:123] Gathering logs for CRI-O ...
	I0722 11:59:22.244300   59674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
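	(Annotation, not part of the captured log.) When the control plane never comes up, the lines above show the evidence-gathering pass: crictl is queried for each expected component, then the kubelet journal, dmesg, `kubectl describe nodes`, and the CRI-O journal are collected. The sketch below runs the same commands locally with os/exec; minikube runs them through its ssh_runner, and availability of the commands on the host is assumed.

	// gather_logs_sketch.go - illustrative diagnostics collection, not minikube source.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and prints its combined output under a small header.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("==> %s %v <==\n%s\n", name, args, out)
		if err != nil {
			fmt.Println("command failed:", err)
		}
	}

	func main() {
		// Containers for a given control-plane component, as in the cri.go listings above.
		run("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver")
		// Recent kubelet and CRI-O journals, plus kernel warnings and errors.
		run("sudo", "journalctl", "-u", "kubelet", "-n", "400")
		run("sudo", "journalctl", "-u", "crio", "-n", "400")
		run("sudo", "bash", "-c", "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		// Node description via the cluster's bundled kubectl, if the apiserver answers.
		run("sudo", "/var/lib/minikube/binaries/v1.20.0/kubectl",
			"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
	}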
	W0722 11:59:22.357489   59674 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 11:59:22.357536   59674 out.go:239] * 
	W0722 11:59:22.357600   59674 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:59:22.357622   59674 out.go:239] * 
	W0722 11:59:22.358374   59674 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 11:59:22.361655   59674 out.go:177] 
	W0722 11:59:22.362800   59674 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 11:59:22.362845   59674 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 11:59:22.362860   59674 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 11:59:22.364239   59674 out.go:177] 
	
	
	==> CRI-O <==
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.254059595Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650270254001428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7bc86ae4-c149-4ee0-a383-2cab3e603903 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.254673827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33a8912e-fdfa-4c58-ab45-a18ec89cae5a name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.254737050Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33a8912e-fdfa-4c58-ab45-a18ec89cae5a name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.254774997Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=33a8912e-fdfa-4c58-ab45-a18ec89cae5a name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.284901921Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b20d8e80-e893-47ef-a6ab-5030b587d6da name=/runtime.v1.RuntimeService/Version
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.285014404Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b20d8e80-e893-47ef-a6ab-5030b587d6da name=/runtime.v1.RuntimeService/Version
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.286328877Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=024b1e54-64c7-42e7-9a48-f4d401ced485 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.286842560Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650270286805585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=024b1e54-64c7-42e7-9a48-f4d401ced485 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.287322979Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08f1da54-61b5-4b0e-a380-d22dbfe2d024 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.287440378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08f1da54-61b5-4b0e-a380-d22dbfe2d024 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.287477754Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=08f1da54-61b5-4b0e-a380-d22dbfe2d024 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.318127217Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f66d108d-3b3e-4978-a98d-3030b53e2afc name=/runtime.v1.RuntimeService/Version
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.318199720Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f66d108d-3b3e-4978-a98d-3030b53e2afc name=/runtime.v1.RuntimeService/Version
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.319173881Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7beeffdf-e742-4942-af67-c5673f007acb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.319627650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650270319609225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7beeffdf-e742-4942-af67-c5673f007acb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.320230750Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=503148e4-692a-44ef-8e6d-7ba33f948fd5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.320284146Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=503148e4-692a-44ef-8e6d-7ba33f948fd5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.320315091Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=503148e4-692a-44ef-8e6d-7ba33f948fd5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.352752046Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e403bdf9-3f23-4b4b-97b5-5a4aa1bb5f99 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.352838847Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e403bdf9-3f23-4b4b-97b5-5a4aa1bb5f99 name=/runtime.v1.RuntimeService/Version
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.353740844Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=899e864a-eaaa-41b8-b63f-2b631136d4bd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.354179443Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721650270354146104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=899e864a-eaaa-41b8-b63f-2b631136d4bd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.354713455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd3c80ed-6c8c-49d1-bcdf-9af6a02f5513 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.354759585Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd3c80ed-6c8c-49d1-bcdf-9af6a02f5513 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 12:11:10 old-k8s-version-101261 crio[646]: time="2024-07-22 12:11:10.354793486Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cd3c80ed-6c8c-49d1-bcdf-9af6a02f5513 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul22 11:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050630] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040294] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.664885] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.301657] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.606133] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.299545] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.059053] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064893] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.225240] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.133946] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.249574] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +5.972877] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.060881] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.615774] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[ +12.639328] kauditd_printk_skb: 46 callbacks suppressed
	[Jul22 11:55] systemd-fstab-generator[5024]: Ignoring "noauto" option for root device
	[Jul22 11:57] systemd-fstab-generator[5299]: Ignoring "noauto" option for root device
	[  +0.065899] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:11:10 up 20 min,  0 users,  load average: 0.00, 0.04, 0.04
	Linux old-k8s-version-101261 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]: net.(*Dialer).DialContext(0xc0002c1740, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b7aae0, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0009eda20, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b7aae0, 0x24, 0x60, 0x7fe47505b480, 0x118, ...)
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]: net/http.(*Transport).dial(0xc0002aaf00, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b7aae0, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]: net/http.(*Transport).dialConn(0xc0002aaf00, 0x4f7fe00, 0xc000052030, 0x0, 0xc000b7d500, 0x5, 0xc000b7aae0, 0x24, 0x0, 0xc000b03c20, ...)
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]: net/http.(*Transport).dialConnFor(0xc0002aaf00, 0xc000bc8000)
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]: created by net/http.(*Transport).queueForDial
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]: goroutine 175 [select]:
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000bbca20, 0xc000a59380, 0xc000b7d7a0, 0xc000b7d740)
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]: created by net.(*netFD).connect
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6821]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Jul 22 12:11:09 old-k8s-version-101261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 143.
	Jul 22 12:11:09 old-k8s-version-101261 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 22 12:11:09 old-k8s-version-101261 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6848]: I0722 12:11:09.809605    6848 server.go:416] Version: v1.20.0
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6848]: I0722 12:11:09.810385    6848 server.go:837] Client rotation is on, will bootstrap in background
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6848]: I0722 12:11:09.815941    6848 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6848]: W0722 12:11:09.817605    6848 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 22 12:11:09 old-k8s-version-101261 kubelet[6848]: I0722 12:11:09.817702    6848 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-101261 -n old-k8s-version-101261
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-101261 -n old-k8s-version-101261: exit status 2 (237.760477ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-101261" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (163.00s)
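Note: the kubeadm output captured above names the kubelet as the failing component and itself lists the commands to run on the node. A minimal triage sketch, assuming shell access to the node via "minikube ssh -p old-k8s-version-101261" (profile name taken from the logs above); the commands are the ones quoted in the kubeadm/minikube output and are not additional diagnostics from this report:

	# inside the node: check whether the kubelet is running and read its recent log entries
	systemctl status kubelet
	journalctl -xeu kubelet
	# inside the node: list any Kubernetes containers CRI-O managed to start
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# back on the host: retry the start with the cgroup-driver hint from the minikube suggestion above
	minikube start -p old-k8s-version-101261 --extra-config=kubelet.cgroup-driver=systemd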

                                                
                                    

Test pass (256/326)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 15.66
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 6.94
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.05
18 TestDownloadOnly/v1.30.3/DeleteAll 0.12
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.11
21 TestDownloadOnly/v1.31.0-beta.0/json-events 5.81
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.12
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.11
30 TestBinaryMirror 0.55
31 TestOffline 95.39
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 137.06
38 TestAddons/parallel/Registry 14.95
40 TestAddons/parallel/InspektorGadget 12.08
42 TestAddons/parallel/HelmTiller 10.76
44 TestAddons/parallel/CSI 68.9
45 TestAddons/parallel/Headlamp 13.97
46 TestAddons/parallel/CloudSpanner 5.69
47 TestAddons/parallel/LocalPath 54.27
48 TestAddons/parallel/NvidiaDevicePlugin 6.68
49 TestAddons/parallel/Yakd 5
53 TestAddons/serial/GCPAuth/Namespaces 0.11
55 TestCertOptions 43.99
56 TestCertExpiration 277.64
58 TestForceSystemdFlag 66.39
59 TestForceSystemdEnv 69.98
61 TestKVMDriverInstallOrUpdate 1.1
65 TestErrorSpam/setup 39.72
66 TestErrorSpam/start 0.32
67 TestErrorSpam/status 0.73
68 TestErrorSpam/pause 1.55
69 TestErrorSpam/unpause 1.51
70 TestErrorSpam/stop 5.91
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 58.6
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 26.77
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.09
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.2
82 TestFunctional/serial/CacheCmd/cache/add_local 0.98
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.61
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
90 TestFunctional/serial/ExtraConfig 35.17
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.31
93 TestFunctional/serial/LogsFileCmd 1.4
94 TestFunctional/serial/InvalidService 4.6
96 TestFunctional/parallel/ConfigCmd 0.34
97 TestFunctional/parallel/DashboardCmd 9.91
98 TestFunctional/parallel/DryRun 0.26
99 TestFunctional/parallel/InternationalLanguage 0.13
100 TestFunctional/parallel/StatusCmd 0.74
104 TestFunctional/parallel/ServiceCmdConnect 11.44
105 TestFunctional/parallel/AddonsCmd 0.12
108 TestFunctional/parallel/SSHCmd 0.45
109 TestFunctional/parallel/CpCmd 1.32
110 TestFunctional/parallel/MySQL 22.57
111 TestFunctional/parallel/FileSync 0.23
112 TestFunctional/parallel/CertSync 1.22
116 TestFunctional/parallel/NodeLabels 0.05
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
120 TestFunctional/parallel/License 0.14
121 TestFunctional/parallel/ServiceCmd/DeployApp 16.2
131 TestFunctional/parallel/Version/short 0.05
132 TestFunctional/parallel/Version/components 0.69
133 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
134 TestFunctional/parallel/ImageCommands/ImageListTable 0.4
135 TestFunctional/parallel/ImageCommands/ImageListJson 0.39
136 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
137 TestFunctional/parallel/ImageCommands/ImageBuild 3.33
138 TestFunctional/parallel/ImageCommands/Setup 0.39
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.18
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.08
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.88
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.54
146 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
147 TestFunctional/parallel/ProfileCmd/profile_list 0.25
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
149 TestFunctional/parallel/MountCmd/any-port 7.46
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
153 TestFunctional/parallel/ServiceCmd/List 0.88
154 TestFunctional/parallel/ServiceCmd/JSONOutput 0.87
155 TestFunctional/parallel/MountCmd/specific-port 2.02
156 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
157 TestFunctional/parallel/ServiceCmd/Format 0.39
158 TestFunctional/parallel/ServiceCmd/URL 0.37
159 TestFunctional/parallel/MountCmd/VerifyCleanup 1.29
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.01
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 201.33
167 TestMultiControlPlane/serial/DeployApp 5.71
168 TestMultiControlPlane/serial/PingHostFromPods 1.16
169 TestMultiControlPlane/serial/AddWorkerNode 55.11
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
172 TestMultiControlPlane/serial/CopyFile 12.41
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.45
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.38
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.15
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
181 TestMultiControlPlane/serial/RestartCluster 327.55
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
183 TestMultiControlPlane/serial/AddSecondaryNode 72.66
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.51
188 TestJSONOutput/start/Command 57.53
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.72
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.61
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.37
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.18
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 90.11
220 TestMountStart/serial/StartWithMountFirst 24
221 TestMountStart/serial/VerifyMountFirst 0.35
222 TestMountStart/serial/StartWithMountSecond 26.5
223 TestMountStart/serial/VerifyMountSecond 0.35
224 TestMountStart/serial/DeleteFirst 0.68
225 TestMountStart/serial/VerifyMountPostDelete 0.35
226 TestMountStart/serial/Stop 1.26
227 TestMountStart/serial/RestartStopped 21.98
228 TestMountStart/serial/VerifyMountPostStop 0.36
231 TestMultiNode/serial/FreshStart2Nodes 114.42
232 TestMultiNode/serial/DeployApp2Nodes 3.61
233 TestMultiNode/serial/PingHostFrom2Pods 0.8
234 TestMultiNode/serial/AddNode 45.43
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.2
237 TestMultiNode/serial/CopyFile 6.84
238 TestMultiNode/serial/StopNode 2.23
239 TestMultiNode/serial/StartAfterStop 37.61
241 TestMultiNode/serial/DeleteNode 2.39
243 TestMultiNode/serial/RestartMultiNode 178.15
244 TestMultiNode/serial/ValidateNameConflict 43.94
251 TestScheduledStopUnix 112.37
255 TestRunningBinaryUpgrade 215.24
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
261 TestNoKubernetes/serial/StartWithK8s 95.67
270 TestPause/serial/Start 125.62
271 TestNoKubernetes/serial/StartWithStopK8s 63.18
272 TestNoKubernetes/serial/Start 25.84
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
274 TestNoKubernetes/serial/ProfileList 1.24
275 TestNoKubernetes/serial/Stop 1.27
276 TestNoKubernetes/serial/StartNoArgs 20.86
277 TestPause/serial/SecondStartNoReconfiguration 48.65
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
286 TestNetworkPlugins/group/false 3.12
290 TestStoppedBinaryUpgrade/Setup 0.54
291 TestStoppedBinaryUpgrade/Upgrade 122.45
292 TestPause/serial/Pause 0.73
293 TestPause/serial/VerifyStatus 0.25
294 TestPause/serial/Unpause 0.64
295 TestPause/serial/PauseAgain 1.03
296 TestPause/serial/DeletePaused 1.03
297 TestPause/serial/VerifyDeletedResources 0.4
298 TestStoppedBinaryUpgrade/MinikubeLogs 0.9
302 TestStartStop/group/no-preload/serial/FirstStart 112.31
303 TestStartStop/group/no-preload/serial/DeployApp 9.28
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
307 TestStartStop/group/embed-certs/serial/FirstStart 59.39
308 TestStartStop/group/embed-certs/serial/DeployApp 9.32
309 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
312 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 97.33
316 TestStartStop/group/no-preload/serial/SecondStart 685.32
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.89
321 TestStartStop/group/embed-certs/serial/SecondStart 539.42
322 TestStartStop/group/old-k8s-version/serial/Stop 4.32
323 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
326 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 465.63
336 TestStartStop/group/newest-cni/serial/FirstStart 47.35
337 TestNetworkPlugins/group/auto/Start 107.59
338 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
340 TestStartStop/group/newest-cni/serial/Stop 11.34
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
342 TestStartStop/group/newest-cni/serial/SecondStart 35.44
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
346 TestStartStop/group/newest-cni/serial/Pause 2.36
347 TestNetworkPlugins/group/kindnet/Start 73.44
348 TestNetworkPlugins/group/auto/KubeletFlags 0.21
349 TestNetworkPlugins/group/auto/NetCatPod 11.25
350 TestNetworkPlugins/group/auto/DNS 0.16
351 TestNetworkPlugins/group/auto/Localhost 0.13
352 TestNetworkPlugins/group/auto/HairPin 0.12
353 TestNetworkPlugins/group/calico/Start 81.68
354 TestNetworkPlugins/group/custom-flannel/Start 105.37
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
357 TestNetworkPlugins/group/kindnet/NetCatPod 13.23
358 TestNetworkPlugins/group/kindnet/DNS 0.18
359 TestNetworkPlugins/group/kindnet/Localhost 0.15
360 TestNetworkPlugins/group/kindnet/HairPin 0.14
361 TestNetworkPlugins/group/enable-default-cni/Start 59.61
362 TestNetworkPlugins/group/flannel/Start 83.09
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.35
365 TestNetworkPlugins/group/calico/NetCatPod 11.95
366 TestNetworkPlugins/group/calico/DNS 0.17
367 TestNetworkPlugins/group/calico/Localhost 0.14
368 TestNetworkPlugins/group/calico/HairPin 0.12
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.24
371 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
372 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.29
373 TestNetworkPlugins/group/bridge/Start 64.07
374 TestNetworkPlugins/group/custom-flannel/DNS 0.23
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
377 TestNetworkPlugins/group/enable-default-cni/DNS 26.2
378 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
379 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
380 TestNetworkPlugins/group/flannel/ControllerPod 6.01
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
382 TestNetworkPlugins/group/flannel/NetCatPod 10.24
383 TestNetworkPlugins/group/flannel/DNS 0.19
384 TestNetworkPlugins/group/flannel/Localhost 0.12
385 TestNetworkPlugins/group/flannel/HairPin 0.12
386 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
387 TestNetworkPlugins/group/bridge/NetCatPod 11.19
388 TestNetworkPlugins/group/bridge/DNS 0.13
389 TestNetworkPlugins/group/bridge/Localhost 0.12
390 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.20.0/json-events (15.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-451721 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-451721 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (15.661427033s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (15.66s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-451721
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-451721: exit status 85 (56.741978ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-451721 | jenkins | v1.33.1 | 22 Jul 24 10:28 UTC |          |
	|         | -p download-only-451721        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 10:28:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 10:28:49.340963   13110 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:28:49.341205   13110 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:28:49.341213   13110 out.go:304] Setting ErrFile to fd 2...
	I0722 10:28:49.341217   13110 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:28:49.341404   13110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	W0722 10:28:49.341517   13110 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19313-5960/.minikube/config/config.json: open /home/jenkins/minikube-integration/19313-5960/.minikube/config/config.json: no such file or directory
	I0722 10:28:49.342023   13110 out.go:298] Setting JSON to true
	I0722 10:28:49.342893   13110 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":681,"bootTime":1721643448,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 10:28:49.342946   13110 start.go:139] virtualization: kvm guest
	I0722 10:28:49.344999   13110 out.go:97] [download-only-451721] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0722 10:28:49.345089   13110 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball: no such file or directory
	I0722 10:28:49.345109   13110 notify.go:220] Checking for updates...
	I0722 10:28:49.346501   13110 out.go:169] MINIKUBE_LOCATION=19313
	I0722 10:28:49.347738   13110 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 10:28:49.348927   13110 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:28:49.350301   13110 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:28:49.351533   13110 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0722 10:28:49.353665   13110 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0722 10:28:49.353912   13110 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 10:28:49.449983   13110 out.go:97] Using the kvm2 driver based on user configuration
	I0722 10:28:49.450015   13110 start.go:297] selected driver: kvm2
	I0722 10:28:49.450040   13110 start.go:901] validating driver "kvm2" against <nil>
	I0722 10:28:49.450408   13110 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:28:49.450550   13110 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 10:28:49.465246   13110 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 10:28:49.465290   13110 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 10:28:49.465858   13110 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0722 10:28:49.466047   13110 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 10:28:49.466109   13110 cni.go:84] Creating CNI manager for ""
	I0722 10:28:49.466127   13110 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 10:28:49.466139   13110 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 10:28:49.466209   13110 start.go:340] cluster config:
	{Name:download-only-451721 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-451721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:28:49.466434   13110 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:28:49.468212   13110 out.go:97] Downloading VM boot image ...
	I0722 10:28:49.468264   13110 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19313-5960/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 10:28:59.704239   13110 out.go:97] Starting "download-only-451721" primary control-plane node in "download-only-451721" cluster
	I0722 10:28:59.704260   13110 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 10:28:59.726196   13110 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0722 10:28:59.726212   13110 cache.go:56] Caching tarball of preloaded images
	I0722 10:28:59.726334   13110 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 10:28:59.727789   13110 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0722 10:28:59.727800   13110 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0722 10:28:59.749456   13110 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0722 10:29:03.598209   13110 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0722 10:29:03.598301   13110 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-451721 host does not exist
	  To start a cluster, run: "minikube start -p download-only-451721"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
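Note: the download URL in the log above carries an md5 digest as a query parameter, so the cached preload tarball can be re-verified by hand if needed. A minimal sketch, assuming the cache path and checksum printed in the log (md5sum is a standard coreutils tool, not part of the test suite):

	# recompute the digest of the cached preload and compare it with the ?checksum=md5:... value from the log
	md5sum /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	# expected (from the download URL above): f93b07cde9c3289306cbaeb7a1803c19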

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-451721
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (6.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-832339 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-832339 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.940567973s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (6.94s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-832339
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-832339: exit status 85 (54.365917ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-451721 | jenkins | v1.33.1 | 22 Jul 24 10:28 UTC |                     |
	|         | -p download-only-451721        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| delete  | -p download-only-451721        | download-only-451721 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| start   | -o=json --download-only        | download-only-832339 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC |                     |
	|         | -p download-only-832339        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 10:29:05
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 10:29:05.313674   13335 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:29:05.313781   13335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:29:05.313790   13335 out.go:304] Setting ErrFile to fd 2...
	I0722 10:29:05.313794   13335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:29:05.313956   13335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:29:05.314459   13335 out.go:298] Setting JSON to true
	I0722 10:29:05.315307   13335 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":697,"bootTime":1721643448,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 10:29:05.315362   13335 start.go:139] virtualization: kvm guest
	I0722 10:29:05.317431   13335 out.go:97] [download-only-832339] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 10:29:05.317559   13335 notify.go:220] Checking for updates...
	I0722 10:29:05.318948   13335 out.go:169] MINIKUBE_LOCATION=19313
	I0722 10:29:05.320195   13335 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 10:29:05.321375   13335 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:29:05.322581   13335 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:29:05.323845   13335 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0722 10:29:05.326064   13335 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0722 10:29:05.326261   13335 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 10:29:05.356424   13335 out.go:97] Using the kvm2 driver based on user configuration
	I0722 10:29:05.356461   13335 start.go:297] selected driver: kvm2
	I0722 10:29:05.356471   13335 start.go:901] validating driver "kvm2" against <nil>
	I0722 10:29:05.356813   13335 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:29:05.356888   13335 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 10:29:05.371133   13335 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 10:29:05.371166   13335 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 10:29:05.371602   13335 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0722 10:29:05.371731   13335 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 10:29:05.371777   13335 cni.go:84] Creating CNI manager for ""
	I0722 10:29:05.371789   13335 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 10:29:05.371800   13335 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 10:29:05.371849   13335 start.go:340] cluster config:
	{Name:download-only-832339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-832339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:29:05.371933   13335 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:29:05.373449   13335 out.go:97] Starting "download-only-832339" primary control-plane node in "download-only-832339" cluster
	I0722 10:29:05.373470   13335 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:29:05.399565   13335 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 10:29:05.399590   13335 cache.go:56] Caching tarball of preloaded images
	I0722 10:29:05.399725   13335 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 10:29:05.401254   13335 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0722 10:29:05.401269   13335 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0722 10:29:05.427740   13335 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-832339 host does not exist
	  To start a cluster, run: "minikube start -p download-only-832339"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-832339
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/json-events (5.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-196061 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-196061 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.809909553s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (5.81s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-196061
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-196061: exit status 85 (54.706818ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-451721 | jenkins | v1.33.1 | 22 Jul 24 10:28 UTC |                     |
	|         | -p download-only-451721             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| delete  | -p download-only-451721             | download-only-451721 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| start   | -o=json --download-only             | download-only-832339 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC |                     |
	|         | -p download-only-832339             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| delete  | -p download-only-832339             | download-only-832339 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC | 22 Jul 24 10:29 UTC |
	| start   | -o=json --download-only             | download-only-196061 | jenkins | v1.33.1 | 22 Jul 24 10:29 UTC |                     |
	|         | -p download-only-196061             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 10:29:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 10:29:12.543132   13539 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:29:12.543254   13539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:29:12.543264   13539 out.go:304] Setting ErrFile to fd 2...
	I0722 10:29:12.543270   13539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:29:12.543423   13539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:29:12.543955   13539 out.go:298] Setting JSON to true
	I0722 10:29:12.544745   13539 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":704,"bootTime":1721643448,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 10:29:12.544800   13539 start.go:139] virtualization: kvm guest
	I0722 10:29:12.546729   13539 out.go:97] [download-only-196061] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 10:29:12.546868   13539 notify.go:220] Checking for updates...
	I0722 10:29:12.548220   13539 out.go:169] MINIKUBE_LOCATION=19313
	I0722 10:29:12.549545   13539 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 10:29:12.550776   13539 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:29:12.551921   13539 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:29:12.553192   13539 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0722 10:29:12.555570   13539 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0722 10:29:12.555767   13539 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 10:29:12.586773   13539 out.go:97] Using the kvm2 driver based on user configuration
	I0722 10:29:12.586790   13539 start.go:297] selected driver: kvm2
	I0722 10:29:12.586800   13539 start.go:901] validating driver "kvm2" against <nil>
	I0722 10:29:12.587089   13539 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:29:12.587147   13539 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19313-5960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 10:29:12.601537   13539 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 10:29:12.601585   13539 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 10:29:12.602180   13539 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0722 10:29:12.602399   13539 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 10:29:12.602429   13539 cni.go:84] Creating CNI manager for ""
	I0722 10:29:12.602438   13539 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 10:29:12.602447   13539 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 10:29:12.602508   13539 start.go:340] cluster config:
	{Name:download-only-196061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-196061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:29:12.602634   13539 iso.go:125] acquiring lock: {Name:mkff6d703f1e15441ac9b34db115c0580bd0e3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 10:29:12.604004   13539 out.go:97] Starting "download-only-196061" primary control-plane node in "download-only-196061" cluster
	I0722 10:29:12.604021   13539 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 10:29:12.624977   13539 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0722 10:29:12.624995   13539 cache.go:56] Caching tarball of preloaded images
	I0722 10:29:12.625114   13539 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 10:29:12.626407   13539 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0722 10:29:12.626419   13539 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0722 10:29:12.656811   13539 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0722 10:29:17.048921   13539 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0722 10:29:17.049028   13539 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19313-5960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-196061 host does not exist
	  To start a cluster, run: "minikube start -p download-only-196061"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-196061
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
x
+
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-224708 --alsologtostderr --binary-mirror http://127.0.0.1:42063 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-224708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-224708
--- PASS: TestBinaryMirror (0.55s)

                                                
                                    
x
+
TestOffline (95.39s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-508112 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-508112 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m34.359710518s)
helpers_test.go:175: Cleaning up "offline-crio-508112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-508112
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-508112: (1.029139857s)
--- PASS: TestOffline (95.39s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-362127
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-362127: exit status 85 (48.158587ms)

                                                
                                                
-- stdout --
	* Profile "addons-362127" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-362127"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-362127
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-362127: exit status 85 (49.261266ms)

                                                
                                                
-- stdout --
	* Profile "addons-362127" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-362127"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (137.06s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-362127 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-362127 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m17.055417585s)
--- PASS: TestAddons/Setup (137.06s)

                                                
                                    
x
+
TestAddons/parallel/Registry (14.95s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 26.227217ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-4sfgx" [b3bc8b0a-e99b-4bf9-aed3-da909aeab28c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007290396s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7tgcs" [30014df8-8abc-48a5-85ce-7a4ab5e79732] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005304913s
addons_test.go:342: (dbg) Run:  kubectl --context addons-362127 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-362127 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-362127 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.182937827s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-362127 ip
2024/07/22 10:31:50 [DEBUG] GET http://192.168.39.147:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-362127 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.95s)
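
The Registry test above probes the addon two ways: an in-cluster busybox pod running wget --spider against registry.kube-system.svc.cluster.local, and a direct GET to the node IP on port 5000 (the DEBUG line). A rough Go sketch of the second, out-of-cluster style of probe is below, treating any 2xx from the registry root as reachable; the hard-coded address is only illustrative, since the real test resolves it with minikube ip.

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// probeRegistry reports whether an HTTP GET to the registry base URL succeeds.
// Sketch only; the check in addons_test.go uses a busybox pod plus a direct GET.
func probeRegistry(baseURL string) error {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(baseURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("registry returned %s", resp.Status)
	}
	return nil
}

func main() {
	// Placeholder address; the test derives the node IP at run time.
	if err := probeRegistry("http://192.168.39.147:5000"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("registry reachable")
}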

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.08s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8ttmh" [99ad3062-5e80-4536-a57e-85a775d7fd15] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00357203s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-362127
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-362127: (6.072330727s)
--- PASS: TestAddons/parallel/InspektorGadget (12.08s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (10.76s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 26.212895ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-89cmg" [4311f07e-4fde-45b6-ab03-28badd1c17a1] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.006989401s
addons_test.go:475: (dbg) Run:  kubectl --context addons-362127 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-362127 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.158613224s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-362127 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.76s)

                                                
                                    
x
+
TestAddons/parallel/CSI (68.9s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 4.852091ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-362127 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-362127 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [78e1f825-e83e-4034-bca5-288d9c80688c] Pending
helpers_test.go:344: "task-pv-pod" [78e1f825-e83e-4034-bca5-288d9c80688c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [78e1f825-e83e-4034-bca5-288d9c80688c] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003758532s
addons_test.go:586: (dbg) Run:  kubectl --context addons-362127 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-362127 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-362127 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-362127 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-362127 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-362127 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-362127 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [bcc45406-6441-4d37-a46d-77eee483270a] Pending
helpers_test.go:344: "task-pv-pod-restore" [bcc45406-6441-4d37-a46d-77eee483270a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [bcc45406-6441-4d37-a46d-77eee483270a] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004005865s
addons_test.go:628: (dbg) Run:  kubectl --context addons-362127 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-362127 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-362127 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-362127 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-362127 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.700364969s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-362127 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (68.90s)
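
The repeated helpers_test.go:394 lines above are a poll loop: kubectl get pvc <name> -o jsonpath={.status.phase} is re-run until the claim reports Bound or the 6m0s budget runs out. A rough Go sketch of that pattern with os/exec follows; it assumes kubectl and the addons-362127 context are available on the local machine and is not the actual helper from helpers_test.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls the PVC phase via kubectl until it is Bound or the timeout expires.
// Sketch only, assuming kubectl and the named context are usable from this process.
func waitForPVCBound(kubeContext, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command(
			"kubectl", "--context", kubeContext,
			"get", "pvc", name,
			"-n", namespace,
			"-o", "jsonpath={.status.phase}",
		).Output()
		phase := strings.TrimSpace(string(out))
		if err == nil && phase == "Bound" {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pvc %s/%s not Bound after %s (last phase %q, err %v)",
				namespace, name, timeout, phase, err)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	if err := waitForPVCBound("addons-362127", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pvc is Bound")
}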

                                                
                                    
x
+
TestAddons/parallel/Headlamp (13.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-362127 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-25xv5" [a1da9ddd-aa30-431f-8b6d-4f19b1f7d384] Pending
helpers_test.go:344: "headlamp-7867546754-25xv5" [a1da9ddd-aa30-431f-8b6d-4f19b1f7d384] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-25xv5" [a1da9ddd-aa30-431f-8b6d-4f19b1f7d384] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004194316s
--- PASS: TestAddons/parallel/Headlamp (13.97s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-6gbtf" [580301db-bb7f-4606-8bc1-0990fc0eb801] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003090905s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-362127
--- PASS: TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (54.27s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-362127 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-362127 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-362127 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a90e98c0-537f-41ca-be5b-a112c7b82a28] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a90e98c0-537f-41ca-be5b-a112c7b82a28] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a90e98c0-537f-41ca-be5b-a112c7b82a28] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003737371s
addons_test.go:992: (dbg) Run:  kubectl --context addons-362127 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-362127 ssh "cat /opt/local-path-provisioner/pvc-bc269bbf-3c8b-4d86-a8aa-8acec54e004a_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-362127 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-362127 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-362127 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-amd64 -p addons-362127 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.361014521s)
--- PASS: TestAddons/parallel/LocalPath (54.27s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.68s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2k5sr" [2de5556d-cd17-43f7-ba1d-8cc5e131883f] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004590381s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-362127
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.68s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-6h47n" [75bce171-cade-4a90-afba-510f2e9fb3ce] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003734494s
--- PASS: TestAddons/parallel/Yakd (5.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-362127 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-362127 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
x
+
TestCertOptions (43.99s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-435680 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-435680 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (42.739642496s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-435680 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-435680 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-435680 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-435680" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-435680
--- PASS: TestCertOptions (43.99s)
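
TestCertOptions above uses openssl x509 -text to confirm that the API server certificate picked up the extra --apiserver-ips and --apiserver-names flags passed at start. A rough equivalent in Go with crypto/x509 is sketched below; the certificate path and expected SAN values mirror the flags in the test command, but the helper itself is hypothetical.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

// certHasSAN reports whether the first certificate in pemPath lists the given
// IP and DNS name among its subject alternative names. Sketch only.
func certHasSAN(pemPath, wantIP, wantDNS string) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}

	ipOK, dnsOK := false, false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(net.ParseIP(wantIP)) {
			ipOK = true
		}
	}
	for _, name := range cert.DNSNames {
		if name == wantDNS {
			dnsOK = true
		}
	}
	return ipOK && dnsOK, nil
}

func main() {
	// Path and values mirror the test flags; adjust for your own cluster.
	ok, err := certHasSAN("/var/lib/minikube/certs/apiserver.crt", "192.168.15.15", "www.google.com")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("SANs present:", ok)
}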

                                                
                                    
x
+
TestCertExpiration (277.64s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-467176 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-467176 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m8.049541041s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-467176 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-467176 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (28.81110087s)
helpers_test.go:175: Cleaning up "cert-expiration-467176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-467176
--- PASS: TestCertExpiration (277.64s)
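
On the durations used above: --cert-expiration=3m forces near-immediate expiry for the first start, while 8760h on the second start is exactly 365 days; the CertExpiration:26280h0m0s seen in the cluster-config dumps earlier works out to three years, which appears to be minikube's default. A quick Go check of that arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	short, _ := time.ParseDuration("3m")
	year, _ := time.ParseDuration("8760h")
	def, _ := time.ParseDuration("26280h")
	fmt.Println(short)                  // 3m0s
	fmt.Println(year.Hours() / 24)      // 365 (days)
	fmt.Println(def.Hours() / 24 / 365) // 3 (years)
}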

                                                
                                    
x
+
TestForceSystemdFlag (66.39s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-989072 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-989072 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m5.212762004s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-989072 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-989072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-989072
--- PASS: TestForceSystemdFlag (66.39s)

                                                
                                    
x
+
TestForceSystemdEnv (69.98s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-601497 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-601497 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m9.177601763s)
helpers_test.go:175: Cleaning up "force-systemd-env-601497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-601497
E0722 11:36:36.611297   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
--- PASS: TestForceSystemdEnv (69.98s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.1s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.10s)

                                                
                                    
x
+
TestErrorSpam/setup (39.72s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-522318 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-522318 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-522318 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-522318 --driver=kvm2  --container-runtime=crio: (39.721496872s)
--- PASS: TestErrorSpam/setup (39.72s)

                                                
                                    
x
+
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
x
+
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
x
+
TestErrorSpam/pause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 pause
--- PASS: TestErrorSpam/pause (1.55s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 unpause
--- PASS: TestErrorSpam/unpause (1.51s)

                                                
                                    
x
+
TestErrorSpam/stop (5.91s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 stop: (2.275373982s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 stop: (1.609042102s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-522318 --log_dir /tmp/nospam-522318 stop: (2.020416995s)
--- PASS: TestErrorSpam/stop (5.91s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19313-5960/.minikube/files/etc/test/nested/copy/13098/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (58.6s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-941610 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0722 10:41:36.610880   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 10:41:36.617262   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 10:41:36.627404   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 10:41:36.647673   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 10:41:36.687959   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 10:41:36.768269   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 10:41:36.928661   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 10:41:37.249226   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 10:41:37.890100   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 10:41:39.170577   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 10:41:41.731619   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 10:41:46.852714   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 10:41:57.093512   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-941610 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (58.604245072s)
--- PASS: TestFunctional/serial/StartWithProxy (58.60s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (26.77s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-941610 --alsologtostderr -v=8
E0722 10:42:17.574185   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-941610 --alsologtostderr -v=8: (26.76556118s)
functional_test.go:659: soft start took 26.766149439s for "functional-941610" cluster.
--- PASS: TestFunctional/serial/SoftStart (26.77s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-941610 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-941610 cache add registry.k8s.io/pause:3.1: (1.016493291s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-941610 cache add registry.k8s.io/pause:3.3: (1.083929763s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-941610 cache add registry.k8s.io/pause:latest: (1.100788407s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.20s)

TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-941610 /tmp/TestFunctionalserialCacheCmdcacheadd_local1959868224/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 cache add minikube-local-cache-test:functional-941610
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 cache delete minikube-local-cache-test:functional-941610
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-941610
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941610 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (215.475199ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 kubectl -- --context functional-941610 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-941610 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (35.17s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-941610 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0722 10:42:58.535687   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-941610 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.171660624s)
functional_test.go:757: restart took 35.171784058s for "functional-941610" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.17s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-941610 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.31s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-941610 logs: (1.307434354s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

TestFunctional/serial/LogsFileCmd (1.4s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 logs --file /tmp/TestFunctionalserialLogsFileCmd14678201/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-941610 logs --file /tmp/TestFunctionalserialLogsFileCmd14678201/001/logs.txt: (1.397024014s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.40s)

TestFunctional/serial/InvalidService (4.6s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-941610 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-941610
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-941610: exit status 115 (265.057665ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.245:31893 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-941610 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-941610 delete -f testdata/invalidsvc.yaml: (1.137424283s)
--- PASS: TestFunctional/serial/InvalidService (4.60s)

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941610 config get cpus: exit status 14 (72.123335ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941610 config get cpus: exit status 14 (44.9011ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)

TestFunctional/parallel/DashboardCmd (9.91s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-941610 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-941610 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 22345: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.91s)

TestFunctional/parallel/DryRun (0.26s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-941610 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-941610 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (125.542312ms)

-- stdout --
	* [functional-941610] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19313
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0722 10:43:40.934956   22030 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:43:40.935200   22030 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:43:40.935209   22030 out.go:304] Setting ErrFile to fd 2...
	I0722 10:43:40.935213   22030 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:43:40.935417   22030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:43:40.935954   22030 out.go:298] Setting JSON to false
	I0722 10:43:40.936826   22030 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1573,"bootTime":1721643448,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 10:43:40.936887   22030 start.go:139] virtualization: kvm guest
	I0722 10:43:40.939007   22030 out.go:177] * [functional-941610] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 10:43:40.940295   22030 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 10:43:40.940329   22030 notify.go:220] Checking for updates...
	I0722 10:43:40.942612   22030 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 10:43:40.943781   22030 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:43:40.945008   22030 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:43:40.946198   22030 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 10:43:40.947245   22030 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 10:43:40.948830   22030 config.go:182] Loaded profile config "functional-941610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:43:40.949382   22030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:43:40.949434   22030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:43:40.964227   22030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0722 10:43:40.964587   22030 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:43:40.965149   22030 main.go:141] libmachine: Using API Version  1
	I0722 10:43:40.965173   22030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:43:40.965461   22030 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:43:40.965639   22030 main.go:141] libmachine: (functional-941610) Calling .DriverName
	I0722 10:43:40.965835   22030 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 10:43:40.966136   22030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:43:40.966170   22030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:43:40.979820   22030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46133
	I0722 10:43:40.980108   22030 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:43:40.980503   22030 main.go:141] libmachine: Using API Version  1
	I0722 10:43:40.980518   22030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:43:40.980836   22030 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:43:40.981015   22030 main.go:141] libmachine: (functional-941610) Calling .DriverName
	I0722 10:43:41.012311   22030 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 10:43:41.013473   22030 start.go:297] selected driver: kvm2
	I0722 10:43:41.013496   22030 start.go:901] validating driver "kvm2" against &{Name:functional-941610 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-941610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:43:41.013610   22030 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 10:43:41.015629   22030 out.go:177] 
	W0722 10:43:41.016817   22030 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0722 10:43:41.017899   22030 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-941610 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.26s)

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-941610 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-941610 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (134.243079ms)

-- stdout --
	* [functional-941610] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19313
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0722 10:43:41.199573   22085 out.go:291] Setting OutFile to fd 1 ...
	I0722 10:43:41.199719   22085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:43:41.199729   22085 out.go:304] Setting ErrFile to fd 2...
	I0722 10:43:41.199735   22085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 10:43:41.200065   22085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 10:43:41.200727   22085 out.go:298] Setting JSON to false
	I0722 10:43:41.201862   22085 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1573,"bootTime":1721643448,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 10:43:41.201935   22085 start.go:139] virtualization: kvm guest
	I0722 10:43:41.204326   22085 out.go:177] * [functional-941610] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0722 10:43:41.206194   22085 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 10:43:41.206222   22085 notify.go:220] Checking for updates...
	I0722 10:43:41.208700   22085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 10:43:41.210000   22085 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 10:43:41.211365   22085 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 10:43:41.212603   22085 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 10:43:41.213853   22085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 10:43:41.215370   22085 config.go:182] Loaded profile config "functional-941610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 10:43:41.215928   22085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:43:41.215976   22085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:43:41.231322   22085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41649
	I0722 10:43:41.231616   22085 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:43:41.232099   22085 main.go:141] libmachine: Using API Version  1
	I0722 10:43:41.232120   22085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:43:41.232425   22085 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:43:41.232614   22085 main.go:141] libmachine: (functional-941610) Calling .DriverName
	I0722 10:43:41.232845   22085 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 10:43:41.233103   22085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 10:43:41.233142   22085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 10:43:41.247056   22085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34755
	I0722 10:43:41.247414   22085 main.go:141] libmachine: () Calling .GetVersion
	I0722 10:43:41.247828   22085 main.go:141] libmachine: Using API Version  1
	I0722 10:43:41.247846   22085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 10:43:41.248116   22085 main.go:141] libmachine: () Calling .GetMachineName
	I0722 10:43:41.248296   22085 main.go:141] libmachine: (functional-941610) Calling .DriverName
	I0722 10:43:41.279017   22085 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0722 10:43:41.280135   22085 start.go:297] selected driver: kvm2
	I0722 10:43:41.280149   22085 start.go:901] validating driver "kvm2" against &{Name:functional-941610 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-941610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 10:43:41.280252   22085 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 10:43:41.282199   22085 out.go:177] 
	W0722 10:43:41.283390   22085 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0722 10:43:41.284526   22085 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (0.74s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.74s)

TestFunctional/parallel/ServiceCmdConnect (11.44s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-941610 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-941610 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-5ggcb" [ed1a2a0a-a1d2-45e3-9f35-66a17a9aef6f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-5ggcb" [ed1a2a0a-a1d2-45e3-9f35-66a17a9aef6f] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003872023s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.245:30408
functional_test.go:1671: http://192.168.39.245:30408: success! body:

Hostname: hello-node-connect-57b4589c47-5ggcb

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.245:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.245:30408
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.44s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

TestFunctional/parallel/CpCmd (1.32s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh -n functional-941610 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 cp functional-941610:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd985456840/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh -n functional-941610 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh -n functional-941610 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.32s)

TestFunctional/parallel/MySQL (22.57s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-941610 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-k7ctz" [191e5936-c1f0-47ef-963d-12a1c34225bb] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-k7ctz" [191e5936-c1f0-47ef-963d-12a1c34225bb] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.004292013s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-941610 exec mysql-64454c8b5c-k7ctz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-941610 exec mysql-64454c8b5c-k7ctz -- mysql -ppassword -e "show databases;": exit status 1 (169.568283ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-941610 exec mysql-64454c8b5c-k7ctz -- mysql -ppassword -e "show databases;"
E0722 10:44:20.456044   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/MySQL (22.57s)

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13098/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "sudo cat /etc/test/nested/copy/13098/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.22s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13098.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "sudo cat /etc/ssl/certs/13098.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13098.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "sudo cat /usr/share/ca-certificates/13098.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/130982.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "sudo cat /etc/ssl/certs/130982.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/130982.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "sudo cat /usr/share/ca-certificates/130982.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.22s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-941610 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941610 ssh "sudo systemctl is-active docker": exit status 1 (256.81302ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941610 ssh "sudo systemctl is-active containerd": exit status 1 (245.542871ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)

TestFunctional/parallel/License (0.14s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.14s)

TestFunctional/parallel/ServiceCmd/DeployApp (16.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-941610 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-941610 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-l4649" [09010f1f-bbb2-43b5-8d3b-c4498b226b0b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-l4649" [09010f1f-bbb2-43b5-8d3b-c4498b226b0b] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 16.003604828s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (16.20s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.69s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-941610 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-941610
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-941610
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-941610 image ls --format short --alsologtostderr:
I0722 10:43:50.800080   23085 out.go:291] Setting OutFile to fd 1 ...
I0722 10:43:50.800250   23085 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 10:43:50.800272   23085 out.go:304] Setting ErrFile to fd 2...
I0722 10:43:50.800288   23085 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 10:43:50.800753   23085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
I0722 10:43:50.801333   23085 config.go:182] Loaded profile config "functional-941610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 10:43:50.801435   23085 config.go:182] Loaded profile config "functional-941610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 10:43:50.801798   23085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0722 10:43:50.801838   23085 main.go:141] libmachine: Launching plugin server for driver kvm2
I0722 10:43:50.816428   23085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37867
I0722 10:43:50.816925   23085 main.go:141] libmachine: () Calling .GetVersion
I0722 10:43:50.817474   23085 main.go:141] libmachine: Using API Version  1
I0722 10:43:50.817496   23085 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 10:43:50.817873   23085 main.go:141] libmachine: () Calling .GetMachineName
I0722 10:43:50.818083   23085 main.go:141] libmachine: (functional-941610) Calling .GetState
I0722 10:43:50.819832   23085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0722 10:43:50.819874   23085 main.go:141] libmachine: Launching plugin server for driver kvm2
I0722 10:43:50.833774   23085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37519
I0722 10:43:50.834130   23085 main.go:141] libmachine: () Calling .GetVersion
I0722 10:43:50.834560   23085 main.go:141] libmachine: Using API Version  1
I0722 10:43:50.834575   23085 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 10:43:50.834895   23085 main.go:141] libmachine: () Calling .GetMachineName
I0722 10:43:50.835072   23085 main.go:141] libmachine: (functional-941610) Calling .DriverName
I0722 10:43:50.835277   23085 ssh_runner.go:195] Run: systemctl --version
I0722 10:43:50.835317   23085 main.go:141] libmachine: (functional-941610) Calling .GetSSHHostname
I0722 10:43:50.838289   23085 main.go:141] libmachine: (functional-941610) DBG | domain functional-941610 has defined MAC address 52:54:00:2e:e5:24 in network mk-functional-941610
I0722 10:43:50.838820   23085 main.go:141] libmachine: (functional-941610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e5:24", ip: ""} in network mk-functional-941610: {Iface:virbr1 ExpiryTime:2024-07-22 11:41:28 +0000 UTC Type:0 Mac:52:54:00:2e:e5:24 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:functional-941610 Clientid:01:52:54:00:2e:e5:24}
I0722 10:43:50.838847   23085 main.go:141] libmachine: (functional-941610) DBG | domain functional-941610 has defined IP address 192.168.39.245 and MAC address 52:54:00:2e:e5:24 in network mk-functional-941610
I0722 10:43:50.838981   23085 main.go:141] libmachine: (functional-941610) Calling .GetSSHPort
I0722 10:43:50.839154   23085 main.go:141] libmachine: (functional-941610) Calling .GetSSHKeyPath
I0722 10:43:50.839305   23085 main.go:141] libmachine: (functional-941610) Calling .GetSSHUsername
I0722 10:43:50.839446   23085 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/functional-941610/id_rsa Username:docker}
I0722 10:43:50.974086   23085 ssh_runner.go:195] Run: sudo crictl images --output json
I0722 10:43:51.019476   23085 main.go:141] libmachine: Making call to close driver server
I0722 10:43:51.019492   23085 main.go:141] libmachine: (functional-941610) Calling .Close
I0722 10:43:51.019723   23085 main.go:141] libmachine: Successfully made call to close driver server
I0722 10:43:51.019733   23085 main.go:141] libmachine: Making call to close connection to plugin binary
I0722 10:43:51.019745   23085 main.go:141] libmachine: Making call to close driver server
I0722 10:43:51.019752   23085 main.go:141] libmachine: (functional-941610) Calling .Close
I0722 10:43:51.019772   23085 main.go:141] libmachine: (functional-941610) DBG | Closing plugin on server side
I0722 10:43:51.019963   23085 main.go:141] libmachine: Successfully made call to close driver server
I0722 10:43:51.019980   23085 main.go:141] libmachine: Making call to close connection to plugin binary
I0722 10:43:51.020000   23085 main.go:141] libmachine: (functional-941610) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
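For reference, the step above just shells out to the minikube binary and inspects its stdout. A minimal sketch of doing the same outside the test harness, assuming the built binary at out/minikube-linux-amd64 and the functional-941610 profile from this log; this is illustrative, not the suite's actual helper code:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Binary path and profile name are copied from the log above; adjust for your checkout.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-941610",
		"image", "ls", "--format", "short")
	out, err := cmd.Output() // stdout only; --alsologtostderr output would go to stderr
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	fmt.Print(string(out))
}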

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-941610 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| localhost/minikube-local-cache-test     | functional-941610  | 63d618772da56 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| docker.io/kicbase/echo-server           | functional-941610  | 9056ab77afb8e | 4.94MB |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-941610 image ls --format table --alsologtostderr:
I0722 10:43:52.859352   23225 out.go:291] Setting OutFile to fd 1 ...
I0722 10:43:52.859944   23225 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 10:43:52.859969   23225 out.go:304] Setting ErrFile to fd 2...
I0722 10:43:52.859978   23225 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 10:43:52.860457   23225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
I0722 10:43:52.861466   23225 config.go:182] Loaded profile config "functional-941610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 10:43:52.861570   23225 config.go:182] Loaded profile config "functional-941610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 10:43:52.861900   23225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0722 10:43:52.861944   23225 main.go:141] libmachine: Launching plugin server for driver kvm2
I0722 10:43:52.877096   23225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33467
I0722 10:43:52.877596   23225 main.go:141] libmachine: () Calling .GetVersion
I0722 10:43:52.878279   23225 main.go:141] libmachine: Using API Version  1
I0722 10:43:52.878311   23225 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 10:43:52.878606   23225 main.go:141] libmachine: () Calling .GetMachineName
I0722 10:43:52.878770   23225 main.go:141] libmachine: (functional-941610) Calling .GetState
I0722 10:43:52.880633   23225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0722 10:43:52.880674   23225 main.go:141] libmachine: Launching plugin server for driver kvm2
I0722 10:43:52.894595   23225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
I0722 10:43:52.895039   23225 main.go:141] libmachine: () Calling .GetVersion
I0722 10:43:52.895562   23225 main.go:141] libmachine: Using API Version  1
I0722 10:43:52.895604   23225 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 10:43:52.895953   23225 main.go:141] libmachine: () Calling .GetMachineName
I0722 10:43:52.896133   23225 main.go:141] libmachine: (functional-941610) Calling .DriverName
I0722 10:43:52.896341   23225 ssh_runner.go:195] Run: systemctl --version
I0722 10:43:52.896370   23225 main.go:141] libmachine: (functional-941610) Calling .GetSSHHostname
I0722 10:43:52.898887   23225 main.go:141] libmachine: (functional-941610) DBG | domain functional-941610 has defined MAC address 52:54:00:2e:e5:24 in network mk-functional-941610
I0722 10:43:52.899234   23225 main.go:141] libmachine: (functional-941610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e5:24", ip: ""} in network mk-functional-941610: {Iface:virbr1 ExpiryTime:2024-07-22 11:41:28 +0000 UTC Type:0 Mac:52:54:00:2e:e5:24 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:functional-941610 Clientid:01:52:54:00:2e:e5:24}
I0722 10:43:52.899271   23225 main.go:141] libmachine: (functional-941610) DBG | domain functional-941610 has defined IP address 192.168.39.245 and MAC address 52:54:00:2e:e5:24 in network mk-functional-941610
I0722 10:43:52.899426   23225 main.go:141] libmachine: (functional-941610) Calling .GetSSHPort
I0722 10:43:52.899593   23225 main.go:141] libmachine: (functional-941610) Calling .GetSSHKeyPath
I0722 10:43:52.899732   23225 main.go:141] libmachine: (functional-941610) Calling .GetSSHUsername
I0722 10:43:52.899893   23225 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/functional-941610/id_rsa Username:docker}
I0722 10:43:53.031383   23225 ssh_runner.go:195] Run: sudo crictl images --output json
I0722 10:43:53.192645   23225 main.go:141] libmachine: Making call to close driver server
I0722 10:43:53.192663   23225 main.go:141] libmachine: (functional-941610) Calling .Close
I0722 10:43:53.192920   23225 main.go:141] libmachine: Successfully made call to close driver server
I0722 10:43:53.192939   23225 main.go:141] libmachine: Making call to close connection to plugin binary
I0722 10:43:53.192969   23225 main.go:141] libmachine: (functional-941610) DBG | Closing plugin on server side
I0722 10:43:53.192972   23225 main.go:141] libmachine: Making call to close driver server
I0722 10:43:53.193054   23225 main.go:141] libmachine: (functional-941610) Calling .Close
I0722 10:43:53.193286   23225 main.go:141] libmachine: Successfully made call to close driver server
I0722 10:43:53.193300   23225 main.go:141] libmachine: Making call to close connection to plugin binary
I0722 10:43:53.193317   23225 main.go:141] libmachine: (functional-941610) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.40s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-941610 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"63d618772da56738b8dc265fd6f5fffb87f5910faa510a9ca238448653d22840","repoDigests":["localhost/minikube-local-cache-test@sha256:e8066c4ecaa7fb3ac9291c4f158e8b0640b551e2f2ddbaf3adec216cd20795e8"],"repoTags":["localhost/minikube-local-cache-test:functional-941610"],"size":"3330"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-
apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"0184c1613d92931126feb4
c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-941610"],"size":"4943877"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c24475
6cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c
0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha25
6:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d4498
41ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-941610 image ls --format json --alsologtostderr:
I0722 10:43:52.466087   23202 out.go:291] Setting OutFile to fd 1 ...
I0722 10:43:52.466377   23202 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 10:43:52.466389   23202 out.go:304] Setting ErrFile to fd 2...
I0722 10:43:52.466395   23202 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 10:43:52.466695   23202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
I0722 10:43:52.467479   23202 config.go:182] Loaded profile config "functional-941610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 10:43:52.467637   23202 config.go:182] Loaded profile config "functional-941610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 10:43:52.468310   23202 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0722 10:43:52.468378   23202 main.go:141] libmachine: Launching plugin server for driver kvm2
I0722 10:43:52.484252   23202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
I0722 10:43:52.484757   23202 main.go:141] libmachine: () Calling .GetVersion
I0722 10:43:52.485420   23202 main.go:141] libmachine: Using API Version  1
I0722 10:43:52.485451   23202 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 10:43:52.485868   23202 main.go:141] libmachine: () Calling .GetMachineName
I0722 10:43:52.486141   23202 main.go:141] libmachine: (functional-941610) Calling .GetState
I0722 10:43:52.488148   23202 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0722 10:43:52.488198   23202 main.go:141] libmachine: Launching plugin server for driver kvm2
I0722 10:43:52.504209   23202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39213
I0722 10:43:52.504661   23202 main.go:141] libmachine: () Calling .GetVersion
I0722 10:43:52.505191   23202 main.go:141] libmachine: Using API Version  1
I0722 10:43:52.505217   23202 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 10:43:52.505546   23202 main.go:141] libmachine: () Calling .GetMachineName
I0722 10:43:52.505741   23202 main.go:141] libmachine: (functional-941610) Calling .DriverName
I0722 10:43:52.505941   23202 ssh_runner.go:195] Run: systemctl --version
I0722 10:43:52.505975   23202 main.go:141] libmachine: (functional-941610) Calling .GetSSHHostname
I0722 10:43:52.508851   23202 main.go:141] libmachine: (functional-941610) DBG | domain functional-941610 has defined MAC address 52:54:00:2e:e5:24 in network mk-functional-941610
I0722 10:43:52.509345   23202 main.go:141] libmachine: (functional-941610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e5:24", ip: ""} in network mk-functional-941610: {Iface:virbr1 ExpiryTime:2024-07-22 11:41:28 +0000 UTC Type:0 Mac:52:54:00:2e:e5:24 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:functional-941610 Clientid:01:52:54:00:2e:e5:24}
I0722 10:43:52.509375   23202 main.go:141] libmachine: (functional-941610) DBG | domain functional-941610 has defined IP address 192.168.39.245 and MAC address 52:54:00:2e:e5:24 in network mk-functional-941610
I0722 10:43:52.509516   23202 main.go:141] libmachine: (functional-941610) Calling .GetSSHPort
I0722 10:43:52.509668   23202 main.go:141] libmachine: (functional-941610) Calling .GetSSHKeyPath
I0722 10:43:52.509843   23202 main.go:141] libmachine: (functional-941610) Calling .GetSSHUsername
I0722 10:43:52.509989   23202 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/functional-941610/id_rsa Username:docker}
I0722 10:43:52.668297   23202 ssh_runner.go:195] Run: sudo crictl images --output json
I0722 10:43:52.809348   23202 main.go:141] libmachine: Making call to close driver server
I0722 10:43:52.809359   23202 main.go:141] libmachine: (functional-941610) Calling .Close
I0722 10:43:52.809628   23202 main.go:141] libmachine: (functional-941610) DBG | Closing plugin on server side
I0722 10:43:52.809642   23202 main.go:141] libmachine: Successfully made call to close driver server
I0722 10:43:52.809659   23202 main.go:141] libmachine: Making call to close connection to plugin binary
I0722 10:43:52.809676   23202 main.go:141] libmachine: Making call to close driver server
I0722 10:43:52.809687   23202 main.go:141] libmachine: (functional-941610) Calling .Close
I0722 10:43:52.809996   23202 main.go:141] libmachine: (functional-941610) DBG | Closing plugin on server side
I0722 10:43:52.809992   23202 main.go:141] libmachine: Successfully made call to close driver server
I0722 10:43:52.810036   23202 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)
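The JSON listing above is a flat array of image records with id, repoDigests, repoTags and size fields (size is a quoted string). A small sketch of decoding that output, assuming the same binary path and profile as above; the struct name and the filtering are illustrative only:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// crioImage mirrors the fields visible in the JSON output above; size is reported as a string.
type crioImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-941610",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	var images []crioImage
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decoding image list: %v", err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 { // untagged entries (dashboard, metrics-scraper) have empty repoTags
			fmt.Printf("%s  %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}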

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-941610 image ls --format yaml --alsologtostderr:
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 63d618772da56738b8dc265fd6f5fffb87f5910faa510a9ca238448653d22840
repoDigests:
- localhost/minikube-local-cache-test@sha256:e8066c4ecaa7fb3ac9291c4f158e8b0640b551e2f2ddbaf3adec216cd20795e8
repoTags:
- localhost/minikube-local-cache-test:functional-941610
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-941610
size: "4943877"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-941610 image ls --format yaml --alsologtostderr:
I0722 10:43:51.067135   23108 out.go:291] Setting OutFile to fd 1 ...
I0722 10:43:51.067261   23108 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 10:43:51.067271   23108 out.go:304] Setting ErrFile to fd 2...
I0722 10:43:51.067277   23108 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 10:43:51.067453   23108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
I0722 10:43:51.067984   23108 config.go:182] Loaded profile config "functional-941610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 10:43:51.068097   23108 config.go:182] Loaded profile config "functional-941610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 10:43:51.068480   23108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0722 10:43:51.068526   23108 main.go:141] libmachine: Launching plugin server for driver kvm2
I0722 10:43:51.084088   23108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
I0722 10:43:51.084513   23108 main.go:141] libmachine: () Calling .GetVersion
I0722 10:43:51.085041   23108 main.go:141] libmachine: Using API Version  1
I0722 10:43:51.085067   23108 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 10:43:51.085483   23108 main.go:141] libmachine: () Calling .GetMachineName
I0722 10:43:51.085709   23108 main.go:141] libmachine: (functional-941610) Calling .GetState
I0722 10:43:51.087444   23108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0722 10:43:51.087486   23108 main.go:141] libmachine: Launching plugin server for driver kvm2
I0722 10:43:51.102444   23108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40767
I0722 10:43:51.102813   23108 main.go:141] libmachine: () Calling .GetVersion
I0722 10:43:51.103283   23108 main.go:141] libmachine: Using API Version  1
I0722 10:43:51.103304   23108 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 10:43:51.103573   23108 main.go:141] libmachine: () Calling .GetMachineName
I0722 10:43:51.103756   23108 main.go:141] libmachine: (functional-941610) Calling .DriverName
I0722 10:43:51.103931   23108 ssh_runner.go:195] Run: systemctl --version
I0722 10:43:51.103950   23108 main.go:141] libmachine: (functional-941610) Calling .GetSSHHostname
I0722 10:43:51.106643   23108 main.go:141] libmachine: (functional-941610) DBG | domain functional-941610 has defined MAC address 52:54:00:2e:e5:24 in network mk-functional-941610
I0722 10:43:51.107029   23108 main.go:141] libmachine: (functional-941610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e5:24", ip: ""} in network mk-functional-941610: {Iface:virbr1 ExpiryTime:2024-07-22 11:41:28 +0000 UTC Type:0 Mac:52:54:00:2e:e5:24 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:functional-941610 Clientid:01:52:54:00:2e:e5:24}
I0722 10:43:51.107066   23108 main.go:141] libmachine: (functional-941610) DBG | domain functional-941610 has defined IP address 192.168.39.245 and MAC address 52:54:00:2e:e5:24 in network mk-functional-941610
I0722 10:43:51.107199   23108 main.go:141] libmachine: (functional-941610) Calling .GetSSHPort
I0722 10:43:51.107350   23108 main.go:141] libmachine: (functional-941610) Calling .GetSSHKeyPath
I0722 10:43:51.107526   23108 main.go:141] libmachine: (functional-941610) Calling .GetSSHUsername
I0722 10:43:51.107666   23108 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/functional-941610/id_rsa Username:docker}
I0722 10:43:51.232727   23108 ssh_runner.go:195] Run: sudo crictl images --output json
I0722 10:43:51.286207   23108 main.go:141] libmachine: Making call to close driver server
I0722 10:43:51.286222   23108 main.go:141] libmachine: (functional-941610) Calling .Close
I0722 10:43:51.286545   23108 main.go:141] libmachine: (functional-941610) DBG | Closing plugin on server side
I0722 10:43:51.286545   23108 main.go:141] libmachine: Successfully made call to close driver server
I0722 10:43:51.286586   23108 main.go:141] libmachine: Making call to close connection to plugin binary
I0722 10:43:51.286598   23108 main.go:141] libmachine: Making call to close driver server
I0722 10:43:51.286610   23108 main.go:141] libmachine: (functional-941610) Calling .Close
I0722 10:43:51.286854   23108 main.go:141] libmachine: (functional-941610) DBG | Closing plugin on server side
I0722 10:43:51.286897   23108 main.go:141] libmachine: Successfully made call to close driver server
I0722 10:43:51.286919   23108 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941610 ssh pgrep buildkitd: exit status 1 (197.037131ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image build -t localhost/my-image:functional-941610 testdata/build --alsologtostderr
2024/07/22 10:43:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-941610 image build -t localhost/my-image:functional-941610 testdata/build --alsologtostderr: (2.741115272s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-941610 image build -t localhost/my-image:functional-941610 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d83903631c6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-941610
--> feb61103271
Successfully tagged localhost/my-image:functional-941610
feb611032717c21cad16178634cf2be794b96f4f8e649727087dfac6f4ab3248
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-941610 image build -t localhost/my-image:functional-941610 testdata/build --alsologtostderr:
I0722 10:43:51.529508   23162 out.go:291] Setting OutFile to fd 1 ...
I0722 10:43:51.530182   23162 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 10:43:51.530192   23162 out.go:304] Setting ErrFile to fd 2...
I0722 10:43:51.530196   23162 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 10:43:51.530374   23162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
I0722 10:43:51.530902   23162 config.go:182] Loaded profile config "functional-941610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 10:43:51.531545   23162 config.go:182] Loaded profile config "functional-941610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0722 10:43:51.531933   23162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0722 10:43:51.531991   23162 main.go:141] libmachine: Launching plugin server for driver kvm2
I0722 10:43:51.548374   23162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38693
I0722 10:43:51.548900   23162 main.go:141] libmachine: () Calling .GetVersion
I0722 10:43:51.549488   23162 main.go:141] libmachine: Using API Version  1
I0722 10:43:51.549516   23162 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 10:43:51.549923   23162 main.go:141] libmachine: () Calling .GetMachineName
I0722 10:43:51.550102   23162 main.go:141] libmachine: (functional-941610) Calling .GetState
I0722 10:43:51.551946   23162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0722 10:43:51.551986   23162 main.go:141] libmachine: Launching plugin server for driver kvm2
I0722 10:43:51.566734   23162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43401
I0722 10:43:51.567215   23162 main.go:141] libmachine: () Calling .GetVersion
I0722 10:43:51.567753   23162 main.go:141] libmachine: Using API Version  1
I0722 10:43:51.567778   23162 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 10:43:51.568087   23162 main.go:141] libmachine: () Calling .GetMachineName
I0722 10:43:51.568267   23162 main.go:141] libmachine: (functional-941610) Calling .DriverName
I0722 10:43:51.568517   23162 ssh_runner.go:195] Run: systemctl --version
I0722 10:43:51.568552   23162 main.go:141] libmachine: (functional-941610) Calling .GetSSHHostname
I0722 10:43:51.571786   23162 main.go:141] libmachine: (functional-941610) DBG | domain functional-941610 has defined MAC address 52:54:00:2e:e5:24 in network mk-functional-941610
I0722 10:43:51.572291   23162 main.go:141] libmachine: (functional-941610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e5:24", ip: ""} in network mk-functional-941610: {Iface:virbr1 ExpiryTime:2024-07-22 11:41:28 +0000 UTC Type:0 Mac:52:54:00:2e:e5:24 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:functional-941610 Clientid:01:52:54:00:2e:e5:24}
I0722 10:43:51.572324   23162 main.go:141] libmachine: (functional-941610) DBG | domain functional-941610 has defined IP address 192.168.39.245 and MAC address 52:54:00:2e:e5:24 in network mk-functional-941610
I0722 10:43:51.572528   23162 main.go:141] libmachine: (functional-941610) Calling .GetSSHPort
I0722 10:43:51.572726   23162 main.go:141] libmachine: (functional-941610) Calling .GetSSHKeyPath
I0722 10:43:51.572883   23162 main.go:141] libmachine: (functional-941610) Calling .GetSSHUsername
I0722 10:43:51.573071   23162 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/functional-941610/id_rsa Username:docker}
I0722 10:43:51.691822   23162 build_images.go:161] Building image from path: /tmp/build.4128680836.tar
I0722 10:43:51.691889   23162 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0722 10:43:51.712011   23162 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4128680836.tar
I0722 10:43:51.730613   23162 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4128680836.tar: stat -c "%s %y" /var/lib/minikube/build/build.4128680836.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4128680836.tar': No such file or directory
I0722 10:43:51.730654   23162 ssh_runner.go:362] scp /tmp/build.4128680836.tar --> /var/lib/minikube/build/build.4128680836.tar (3072 bytes)
I0722 10:43:51.757317   23162 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4128680836
I0722 10:43:51.767556   23162 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4128680836 -xf /var/lib/minikube/build/build.4128680836.tar
I0722 10:43:51.777983   23162 crio.go:315] Building image: /var/lib/minikube/build/build.4128680836
I0722 10:43:51.778062   23162 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-941610 /var/lib/minikube/build/build.4128680836 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0722 10:43:54.173645   23162 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-941610 /var/lib/minikube/build/build.4128680836 --cgroup-manager=cgroupfs: (2.395555837s)
I0722 10:43:54.173712   23162 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4128680836
I0722 10:43:54.200468   23162 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4128680836.tar
I0722 10:43:54.224839   23162 build_images.go:217] Built localhost/my-image:functional-941610 from /tmp/build.4128680836.tar
I0722 10:43:54.224876   23162 build_images.go:133] succeeded building to: functional-941610
I0722 10:43:54.224883   23162 build_images.go:134] failed building to: 
I0722 10:43:54.224908   23162 main.go:141] libmachine: Making call to close driver server
I0722 10:43:54.224920   23162 main.go:141] libmachine: (functional-941610) Calling .Close
I0722 10:43:54.225206   23162 main.go:141] libmachine: Successfully made call to close driver server
I0722 10:43:54.225223   23162 main.go:141] libmachine: Making call to close connection to plugin binary
I0722 10:43:54.225225   23162 main.go:141] libmachine: (functional-941610) DBG | Closing plugin on server side
I0722 10:43:54.225231   23162 main.go:141] libmachine: Making call to close driver server
I0722 10:43:54.225239   23162 main.go:141] libmachine: (functional-941610) Calling .Close
I0722 10:43:54.225451   23162 main.go:141] libmachine: Successfully made call to close driver server
I0722 10:43:54.225463   23162 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.33s)
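The build step above feeds testdata/build through `minikube image build`, which on the crio runtime runs a podman build on the guest; the STEP lines show a three-step build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A hedged sketch of driving the same build and then checking the tag in `image ls`, assuming the binary path, profile and tag shown above; this is not the suite's helper code:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	const tag = "localhost/my-image:functional-941610"

	// Build the context at testdata/build and tag the result, as in the log above.
	build := exec.Command("out/minikube-linux-amd64", "-p", "functional-941610",
		"image", "build", "-t", tag, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}

	// Confirm the freshly built tag shows up in the runtime's image list.
	ls, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-941610", "image", "ls").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	if !strings.Contains(string(ls), tag) {
		log.Fatalf("%s not found in image list", tag)
	}
	log.Printf("%s built and listed", tag)
}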

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-941610
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.39s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image load --daemon docker.io/kicbase/echo-server:functional-941610 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-941610 image load --daemon docker.io/kicbase/echo-server:functional-941610 --alsologtostderr: (1.958185354s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.18s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image load --daemon docker.io/kicbase/echo-server:functional-941610 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-941610
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image load --daemon docker.io/kicbase/echo-server:functional-941610 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image save docker.io/kicbase/echo-server:functional-941610 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image rm docker.io/kicbase/echo-server:functional-941610 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-941610
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 image save --daemon docker.io/kicbase/echo-server:functional-941610 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-941610
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)
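ImageSaveToFile, ImageRemove and ImageLoadFromFile above exercise a save-to-tarball, remove, load-from-tarball round trip for the echo-server image. A condensed sketch of that round trip, assuming the binary path and tag from this log; the tarball path here is illustrative (the test writes into its Jenkins workspace), and the helper is not part of the suite:

package main

import (
	"log"
	"os/exec"
)

// run invokes the minikube binary against the functional-941610 profile and aborts on failure.
func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", "functional-941610"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
}

func main() {
	const tag = "docker.io/kicbase/echo-server:functional-941610"
	const tarball = "/tmp/echo-server-save.tar" // illustrative local path

	run("image", "save", tag, tarball) // cluster -> tarball
	run("image", "rm", tag)            // drop the tag from the runtime
	run("image", "load", tarball)      // tarball -> cluster again
	log.Print("save/remove/load round trip complete")
}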

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

TestFunctional/parallel/ProfileCmd/profile_list (0.25s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "207.679867ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "40.926275ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.25s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "214.338843ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "42.348097ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)
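The ProfileCmd tests above assert that the `profile list` variants succeed and log how long each invocation took. A small sketch of timing the JSON variant and sanity-checking its output, assuming the same binary path; the 5-second bound is illustrative and not the suite's actual threshold:

package main

import (
	"encoding/json"
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json", "--light").Output()
	if err != nil {
		log.Fatalf("profile list failed: %v", err)
	}
	elapsed := time.Since(start)
	log.Printf("Took %q to run profile list -o json --light", elapsed.String())

	if !json.Valid(out) {
		log.Fatal("profile list did not return valid JSON")
	}
	if elapsed > 5*time.Second { // illustrative bound only
		log.Print("warning: profile list was unusually slow")
	}
}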

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.46s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-941610 /tmp/TestFunctionalparallelMountCmdany-port2075338660/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721645019256368863" to /tmp/TestFunctionalparallelMountCmdany-port2075338660/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721645019256368863" to /tmp/TestFunctionalparallelMountCmdany-port2075338660/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721645019256368863" to /tmp/TestFunctionalparallelMountCmdany-port2075338660/001/test-1721645019256368863
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941610 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (181.958743ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 22 10:43 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 22 10:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 22 10:43 test-1721645019256368863
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh cat /mount-9p/test-1721645019256368863
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-941610 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [31b5c600-5f6b-4913-8638-d26b7e466b73] Pending
helpers_test.go:344: "busybox-mount" [31b5c600-5f6b-4913-8638-d26b7e466b73] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [31b5c600-5f6b-4913-8638-d26b7e466b73] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [31b5c600-5f6b-4913-8638-d26b7e466b73] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003250519s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-941610 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-941610 /tmp/TestFunctionalparallelMountCmdany-port2075338660/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.46s)
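The any-port test above starts `minikube mount` as a background process, retries `ssh findmnt` until the 9p mount appears (the first attempt fails with exit status 1, as logged), exercises the mount from a pod, and finally unmounts. A rough sketch of that start/poll/stop skeleton, assuming the binary path and profile from this log and a local source directory of your own; this is not the test's mount helper:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// /tmp/mount-src is a local directory of your own; the test uses a generated temp dir.
	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-941610",
		"/tmp/mount-src:/mount-9p", "--alsologtostderr", "-v=1")
	if err := mount.Start(); err != nil {
		log.Fatalf("starting mount: %v", err)
	}
	defer mount.Process.Kill() // stop the background mount when we are done

	// Poll until findmnt inside the guest sees the 9p filesystem; the first attempt in the
	// log above fails with exit status 1 because the mount is not ready yet.
	for i := 0; i < 10; i++ {
		check := exec.Command("out/minikube-linux-amd64", "-p", "functional-941610",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if check.Run() == nil {
			log.Print("/mount-9p is mounted")
			return
		}
		time.Sleep(time.Second)
	}
	log.Print("mount never became ready")
}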

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ServiceCmd/List (0.88s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.88s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.87s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 service list -o json
functional_test.go:1490: Took "870.750869ms" to run "out/minikube-linux-amd64 -p functional-941610 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.87s)

TestFunctional/parallel/MountCmd/specific-port (2.02s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-941610 /tmp/TestFunctionalparallelMountCmdspecific-port2957008484/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941610 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (259.825432ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-941610 /tmp/TestFunctionalparallelMountCmdspecific-port2957008484/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941610 ssh "sudo umount -f /mount-9p": exit status 1 (328.989284ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-941610 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-941610 /tmp/TestFunctionalparallelMountCmdspecific-port2957008484/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.02s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.245:32118
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

TestFunctional/parallel/ServiceCmd/Format (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.245:32118
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.29s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-941610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup993641779/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-941610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup993641779/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-941610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup993641779/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-941610 ssh "findmnt -T" /mount1: exit status 1 (265.034227ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-941610 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-941610 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-941610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup993641779/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-941610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup993641779/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-941610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup993641779/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.29s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-941610
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-941610
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-941610
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (201.33s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-461283 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0722 10:47:04.296570   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 10:48:29.087924   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 10:48:29.093233   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 10:48:29.103455   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 10:48:29.123696   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 10:48:29.163999   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 10:48:29.244318   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 10:48:29.404740   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 10:48:29.725323   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 10:48:30.365728   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 10:48:31.646571   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 10:48:34.207424   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 10:48:39.327839   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 10:48:49.568933   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 10:49:10.049655   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 10:49:51.009807   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-461283 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m20.69421753s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (201.33s)

TestMultiControlPlane/serial/DeployApp (5.71s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-461283 -- rollout status deployment/busybox: (3.556923853s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- exec busybox-fc5497c4f-bf5vn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- exec busybox-fc5497c4f-cgtcl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- exec busybox-fc5497c4f-hkw9v -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- exec busybox-fc5497c4f-bf5vn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- exec busybox-fc5497c4f-cgtcl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- exec busybox-fc5497c4f-hkw9v -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- exec busybox-fc5497c4f-bf5vn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- exec busybox-fc5497c4f-cgtcl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- exec busybox-fc5497c4f-hkw9v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.71s)

TestMultiControlPlane/serial/PingHostFromPods (1.16s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- exec busybox-fc5497c4f-bf5vn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- exec busybox-fc5497c4f-bf5vn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- exec busybox-fc5497c4f-cgtcl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- exec busybox-fc5497c4f-cgtcl -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- exec busybox-fc5497c4f-hkw9v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-461283 -- exec busybox-fc5497c4f-hkw9v -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.16s)

TestMultiControlPlane/serial/AddWorkerNode (55.11s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-461283 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-461283 -v=7 --alsologtostderr: (54.310836006s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.11s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-461283 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

TestMultiControlPlane/serial/CopyFile (12.41s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp testdata/cp-test.txt ha-461283:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp ha-461283:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3161647133/001/cp-test_ha-461283.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp ha-461283:/home/docker/cp-test.txt ha-461283-m02:/home/docker/cp-test_ha-461283_ha-461283-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m02 "sudo cat /home/docker/cp-test_ha-461283_ha-461283-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp ha-461283:/home/docker/cp-test.txt ha-461283-m03:/home/docker/cp-test_ha-461283_ha-461283-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m03 "sudo cat /home/docker/cp-test_ha-461283_ha-461283-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp ha-461283:/home/docker/cp-test.txt ha-461283-m04:/home/docker/cp-test_ha-461283_ha-461283-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m04 "sudo cat /home/docker/cp-test_ha-461283_ha-461283-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp testdata/cp-test.txt ha-461283-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp ha-461283-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3161647133/001/cp-test_ha-461283-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp ha-461283-m02:/home/docker/cp-test.txt ha-461283:/home/docker/cp-test_ha-461283-m02_ha-461283.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283 "sudo cat /home/docker/cp-test_ha-461283-m02_ha-461283.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp ha-461283-m02:/home/docker/cp-test.txt ha-461283-m03:/home/docker/cp-test_ha-461283-m02_ha-461283-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m03 "sudo cat /home/docker/cp-test_ha-461283-m02_ha-461283-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp ha-461283-m02:/home/docker/cp-test.txt ha-461283-m04:/home/docker/cp-test_ha-461283-m02_ha-461283-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m04 "sudo cat /home/docker/cp-test_ha-461283-m02_ha-461283-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp testdata/cp-test.txt ha-461283-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp ha-461283-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3161647133/001/cp-test_ha-461283-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp ha-461283-m03:/home/docker/cp-test.txt ha-461283:/home/docker/cp-test_ha-461283-m03_ha-461283.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283 "sudo cat /home/docker/cp-test_ha-461283-m03_ha-461283.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp ha-461283-m03:/home/docker/cp-test.txt ha-461283-m02:/home/docker/cp-test_ha-461283-m03_ha-461283-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m02 "sudo cat /home/docker/cp-test_ha-461283-m03_ha-461283-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp ha-461283-m03:/home/docker/cp-test.txt ha-461283-m04:/home/docker/cp-test_ha-461283-m03_ha-461283-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m04 "sudo cat /home/docker/cp-test_ha-461283-m03_ha-461283-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp testdata/cp-test.txt ha-461283-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3161647133/001/cp-test_ha-461283-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt ha-461283:/home/docker/cp-test_ha-461283-m04_ha-461283.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283 "sudo cat /home/docker/cp-test_ha-461283-m04_ha-461283.txt"
E0722 10:51:12.930239   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt ha-461283-m02:/home/docker/cp-test_ha-461283-m04_ha-461283-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m02 "sudo cat /home/docker/cp-test_ha-461283-m04_ha-461283-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 cp ha-461283-m04:/home/docker/cp-test.txt ha-461283-m03:/home/docker/cp-test_ha-461283-m04_ha-461283-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 ssh -n ha-461283-m03 "sudo cat /home/docker/cp-test_ha-461283-m04_ha-461283-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.41s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.45s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.447296294s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.45s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

TestMultiControlPlane/serial/DeleteSecondaryNode (17.15s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-461283 node delete m03 -v=7 --alsologtostderr: (16.430361904s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.15s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

TestMultiControlPlane/serial/RestartCluster (327.55s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-461283 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0722 11:03:29.088911   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 11:04:52.132048   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 11:06:36.610965   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 11:08:29.087165   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-461283 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m26.773888343s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (327.55s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

TestMultiControlPlane/serial/AddSecondaryNode (72.66s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-461283 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-461283 --control-plane -v=7 --alsologtostderr: (1m11.859142578s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-461283 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.66s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

TestJSONOutput/start/Command (57.53s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-320034 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-320034 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (57.531866301s)
--- PASS: TestJSONOutput/start/Command (57.53s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-320034 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-320034 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.37s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-320034 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-320034 --output=json --user=testUser: (7.374418526s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-182919 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-182919 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.093852ms)

-- stdout --
	{"specversion":"1.0","id":"44e0873e-f752-4bbd-bc4e-136473906777","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-182919] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a19c6055-410a-4a4d-bdb6-c65b5a62949d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19313"}}
	{"specversion":"1.0","id":"e54b271f-c750-4a9c-9a56-3833131996e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7c8fa5db-e6f7-4c14-9eef-5f2f231777db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig"}}
	{"specversion":"1.0","id":"122b5fda-f195-4ad2-87d1-9542c3905e54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube"}}
	{"specversion":"1.0","id":"6e20fe78-178e-4cf7-82c8-a0078969d7c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a4c21904-e7e8-4631-9e01-325ab022b904","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c62676b0-b5c0-4e57-b1ab-608f9db5e92d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-182919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-182919
--- PASS: TestErrorJSONOutput (0.18s)

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (90.11s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-960687 --driver=kvm2  --container-runtime=crio
E0722 11:11:36.610617   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-960687 --driver=kvm2  --container-runtime=crio: (40.123205739s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-963866 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-963866 --driver=kvm2  --container-runtime=crio: (47.16915641s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-960687
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-963866
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-963866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-963866
helpers_test.go:175: Cleaning up "first-960687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-960687
--- PASS: TestMinikubeProfile (90.11s)

TestMountStart/serial/StartWithMountFirst (24s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-080085 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-080085 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.998412309s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.00s)

TestMountStart/serial/VerifyMountFirst (0.35s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-080085 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-080085 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)

TestMountStart/serial/StartWithMountSecond (26.5s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-097854 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-097854 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.502152087s)
E0722 11:13:29.087679   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountSecond (26.50s)

TestMountStart/serial/VerifyMountSecond (0.35s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-097854 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-097854 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

TestMountStart/serial/DeleteFirst (0.68s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-080085 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

TestMountStart/serial/VerifyMountPostDelete (0.35s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-097854 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-097854 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

TestMountStart/serial/Stop (1.26s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-097854
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-097854: (1.262245038s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (21.98s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-097854
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-097854: (20.983455395s)
--- PASS: TestMountStart/serial/RestartStopped (21.98s)

TestMountStart/serial/VerifyMountPostStop (0.36s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-097854 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-097854 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

TestMultiNode/serial/FreshStart2Nodes (114.42s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-025157 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0722 11:14:39.658489   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-025157 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.03652109s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.42s)

TestMultiNode/serial/DeployApp2Nodes (3.61s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025157 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025157 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-025157 -- rollout status deployment/busybox: (2.227636879s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025157 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025157 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025157 -- exec busybox-fc5497c4f-65kqg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025157 -- exec busybox-fc5497c4f-pd65c -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025157 -- exec busybox-fc5497c4f-65kqg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025157 -- exec busybox-fc5497c4f-pd65c -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025157 -- exec busybox-fc5497c4f-65kqg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025157 -- exec busybox-fc5497c4f-pd65c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.61s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025157 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025157 -- exec busybox-fc5497c4f-65kqg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025157 -- exec busybox-fc5497c4f-65kqg -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025157 -- exec busybox-fc5497c4f-pd65c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025157 -- exec busybox-fc5497c4f-pd65c -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

                                                
                                    
TestMultiNode/serial/AddNode (45.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-025157 -v 3 --alsologtostderr
E0722 11:16:36.610964   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-025157 -v 3 --alsologtostderr: (44.88635249s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.43s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-025157 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 cp testdata/cp-test.txt multinode-025157:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 cp multinode-025157:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile430864957/001/cp-test_multinode-025157.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 cp multinode-025157:/home/docker/cp-test.txt multinode-025157-m02:/home/docker/cp-test_multinode-025157_multinode-025157-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157-m02 "sudo cat /home/docker/cp-test_multinode-025157_multinode-025157-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 cp multinode-025157:/home/docker/cp-test.txt multinode-025157-m03:/home/docker/cp-test_multinode-025157_multinode-025157-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157-m03 "sudo cat /home/docker/cp-test_multinode-025157_multinode-025157-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 cp testdata/cp-test.txt multinode-025157-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 cp multinode-025157-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile430864957/001/cp-test_multinode-025157-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 cp multinode-025157-m02:/home/docker/cp-test.txt multinode-025157:/home/docker/cp-test_multinode-025157-m02_multinode-025157.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157 "sudo cat /home/docker/cp-test_multinode-025157-m02_multinode-025157.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 cp multinode-025157-m02:/home/docker/cp-test.txt multinode-025157-m03:/home/docker/cp-test_multinode-025157-m02_multinode-025157-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157-m03 "sudo cat /home/docker/cp-test_multinode-025157-m02_multinode-025157-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 cp testdata/cp-test.txt multinode-025157-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 cp multinode-025157-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile430864957/001/cp-test_multinode-025157-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 cp multinode-025157-m03:/home/docker/cp-test.txt multinode-025157:/home/docker/cp-test_multinode-025157-m03_multinode-025157.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157 "sudo cat /home/docker/cp-test_multinode-025157-m03_multinode-025157.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 cp multinode-025157-m03:/home/docker/cp-test.txt multinode-025157-m02:/home/docker/cp-test_multinode-025157-m03_multinode-025157-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 ssh -n multinode-025157-m02 "sudo cat /home/docker/cp-test_multinode-025157-m03_multinode-025157-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.84s)

                                                
                                    
TestMultiNode/serial/StopNode (2.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-025157 node stop m03: (1.416352508s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-025157 status: exit status 7 (403.590394ms)

                                                
                                                
-- stdout --
	multinode-025157
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-025157-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-025157-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-025157 status --alsologtostderr: exit status 7 (404.982622ms)

                                                
                                                
-- stdout --
	multinode-025157
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-025157-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-025157-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 11:16:49.025002   41212 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:16:49.025258   41212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:16:49.025268   41212 out.go:304] Setting ErrFile to fd 2...
	I0722 11:16:49.025272   41212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:16:49.025468   41212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:16:49.025662   41212 out.go:298] Setting JSON to false
	I0722 11:16:49.025693   41212 mustload.go:65] Loading cluster: multinode-025157
	I0722 11:16:49.025810   41212 notify.go:220] Checking for updates...
	I0722 11:16:49.026121   41212 config.go:182] Loaded profile config "multinode-025157": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:16:49.026135   41212 status.go:255] checking status of multinode-025157 ...
	I0722 11:16:49.026535   41212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:16:49.026594   41212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:16:49.045222   41212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
	I0722 11:16:49.045610   41212 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:16:49.046145   41212 main.go:141] libmachine: Using API Version  1
	I0722 11:16:49.046163   41212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:16:49.046527   41212 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:16:49.046712   41212 main.go:141] libmachine: (multinode-025157) Calling .GetState
	I0722 11:16:49.048216   41212 status.go:330] multinode-025157 host status = "Running" (err=<nil>)
	I0722 11:16:49.048236   41212 host.go:66] Checking if "multinode-025157" exists ...
	I0722 11:16:49.048570   41212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:16:49.048602   41212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:16:49.063103   41212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36753
	I0722 11:16:49.063393   41212 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:16:49.063825   41212 main.go:141] libmachine: Using API Version  1
	I0722 11:16:49.063850   41212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:16:49.064153   41212 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:16:49.064341   41212 main.go:141] libmachine: (multinode-025157) Calling .GetIP
	I0722 11:16:49.067066   41212 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:16:49.067459   41212 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:16:49.067494   41212 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:16:49.067619   41212 host.go:66] Checking if "multinode-025157" exists ...
	I0722 11:16:49.067904   41212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:16:49.067943   41212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:16:49.082751   41212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35543
	I0722 11:16:49.083131   41212 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:16:49.083529   41212 main.go:141] libmachine: Using API Version  1
	I0722 11:16:49.083546   41212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:16:49.083826   41212 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:16:49.084026   41212 main.go:141] libmachine: (multinode-025157) Calling .DriverName
	I0722 11:16:49.084231   41212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 11:16:49.084249   41212 main.go:141] libmachine: (multinode-025157) Calling .GetSSHHostname
	I0722 11:16:49.086917   41212 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:16:49.087299   41212 main.go:141] libmachine: (multinode-025157) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:6e:1a", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:14:09 +0000 UTC Type:0 Mac:52:54:00:d6:6e:1a Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-025157 Clientid:01:52:54:00:d6:6e:1a}
	I0722 11:16:49.087330   41212 main.go:141] libmachine: (multinode-025157) DBG | domain multinode-025157 has defined IP address 192.168.39.158 and MAC address 52:54:00:d6:6e:1a in network mk-multinode-025157
	I0722 11:16:49.087444   41212 main.go:141] libmachine: (multinode-025157) Calling .GetSSHPort
	I0722 11:16:49.087599   41212 main.go:141] libmachine: (multinode-025157) Calling .GetSSHKeyPath
	I0722 11:16:49.087758   41212 main.go:141] libmachine: (multinode-025157) Calling .GetSSHUsername
	I0722 11:16:49.087913   41212 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/multinode-025157/id_rsa Username:docker}
	I0722 11:16:49.171565   41212 ssh_runner.go:195] Run: systemctl --version
	I0722 11:16:49.177582   41212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:16:49.192317   41212 kubeconfig.go:125] found "multinode-025157" server: "https://192.168.39.158:8443"
	I0722 11:16:49.192343   41212 api_server.go:166] Checking apiserver status ...
	I0722 11:16:49.192410   41212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 11:16:49.206324   41212 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1117/cgroup
	W0722 11:16:49.215988   41212 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1117/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 11:16:49.216026   41212 ssh_runner.go:195] Run: ls
	I0722 11:16:49.220228   41212 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0722 11:16:49.224098   41212 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0722 11:16:49.224116   41212 status.go:422] multinode-025157 apiserver status = Running (err=<nil>)
	I0722 11:16:49.224125   41212 status.go:257] multinode-025157 status: &{Name:multinode-025157 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 11:16:49.224146   41212 status.go:255] checking status of multinode-025157-m02 ...
	I0722 11:16:49.224450   41212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:16:49.224482   41212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:16:49.239359   41212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33301
	I0722 11:16:49.239767   41212 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:16:49.240244   41212 main.go:141] libmachine: Using API Version  1
	I0722 11:16:49.240267   41212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:16:49.240561   41212 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:16:49.240717   41212 main.go:141] libmachine: (multinode-025157-m02) Calling .GetState
	I0722 11:16:49.242202   41212 status.go:330] multinode-025157-m02 host status = "Running" (err=<nil>)
	I0722 11:16:49.242218   41212 host.go:66] Checking if "multinode-025157-m02" exists ...
	I0722 11:16:49.242527   41212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:16:49.242586   41212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:16:49.257507   41212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40417
	I0722 11:16:49.257913   41212 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:16:49.258307   41212 main.go:141] libmachine: Using API Version  1
	I0722 11:16:49.258327   41212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:16:49.258693   41212 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:16:49.258859   41212 main.go:141] libmachine: (multinode-025157-m02) Calling .GetIP
	I0722 11:16:49.261624   41212 main.go:141] libmachine: (multinode-025157-m02) DBG | domain multinode-025157-m02 has defined MAC address 52:54:00:81:30:46 in network mk-multinode-025157
	I0722 11:16:49.262007   41212 main.go:141] libmachine: (multinode-025157-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:30:46", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:15:18 +0000 UTC Type:0 Mac:52:54:00:81:30:46 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:multinode-025157-m02 Clientid:01:52:54:00:81:30:46}
	I0722 11:16:49.262045   41212 main.go:141] libmachine: (multinode-025157-m02) DBG | domain multinode-025157-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:81:30:46 in network mk-multinode-025157
	I0722 11:16:49.262210   41212 host.go:66] Checking if "multinode-025157-m02" exists ...
	I0722 11:16:49.262519   41212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:16:49.262550   41212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:16:49.280236   41212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41041
	I0722 11:16:49.280622   41212 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:16:49.281050   41212 main.go:141] libmachine: Using API Version  1
	I0722 11:16:49.281071   41212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:16:49.281366   41212 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:16:49.281539   41212 main.go:141] libmachine: (multinode-025157-m02) Calling .DriverName
	I0722 11:16:49.281736   41212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 11:16:49.281772   41212 main.go:141] libmachine: (multinode-025157-m02) Calling .GetSSHHostname
	I0722 11:16:49.284430   41212 main.go:141] libmachine: (multinode-025157-m02) DBG | domain multinode-025157-m02 has defined MAC address 52:54:00:81:30:46 in network mk-multinode-025157
	I0722 11:16:49.284840   41212 main.go:141] libmachine: (multinode-025157-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:30:46", ip: ""} in network mk-multinode-025157: {Iface:virbr1 ExpiryTime:2024-07-22 12:15:18 +0000 UTC Type:0 Mac:52:54:00:81:30:46 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:multinode-025157-m02 Clientid:01:52:54:00:81:30:46}
	I0722 11:16:49.284858   41212 main.go:141] libmachine: (multinode-025157-m02) DBG | domain multinode-025157-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:81:30:46 in network mk-multinode-025157
	I0722 11:16:49.284981   41212 main.go:141] libmachine: (multinode-025157-m02) Calling .GetSSHPort
	I0722 11:16:49.285146   41212 main.go:141] libmachine: (multinode-025157-m02) Calling .GetSSHKeyPath
	I0722 11:16:49.285287   41212 main.go:141] libmachine: (multinode-025157-m02) Calling .GetSSHUsername
	I0722 11:16:49.285440   41212 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19313-5960/.minikube/machines/multinode-025157-m02/id_rsa Username:docker}
	I0722 11:16:49.359447   41212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 11:16:49.373190   41212 status.go:257] multinode-025157-m02 status: &{Name:multinode-025157-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0722 11:16:49.373226   41212 status.go:255] checking status of multinode-025157-m03 ...
	I0722 11:16:49.373667   41212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 11:16:49.373731   41212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 11:16:49.389023   41212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32975
	I0722 11:16:49.389404   41212 main.go:141] libmachine: () Calling .GetVersion
	I0722 11:16:49.389844   41212 main.go:141] libmachine: Using API Version  1
	I0722 11:16:49.389866   41212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 11:16:49.390184   41212 main.go:141] libmachine: () Calling .GetMachineName
	I0722 11:16:49.390353   41212 main.go:141] libmachine: (multinode-025157-m03) Calling .GetState
	I0722 11:16:49.391754   41212 status.go:330] multinode-025157-m03 host status = "Stopped" (err=<nil>)
	I0722 11:16:49.391767   41212 status.go:343] host is not running, skipping remaining checks
	I0722 11:16:49.391774   41212 status.go:257] multinode-025157-m03 status: &{Name:multinode-025157-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-025157 node start m03 -v=7 --alsologtostderr: (37.006632342s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.61s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-025157 node delete m03: (1.887101878s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.39s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (178.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-025157 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0722 11:26:36.612213   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-025157 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m57.64011764s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025157 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (178.15s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-025157
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-025157-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-025157-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (58.442578ms)

                                                
                                                
-- stdout --
	* [multinode-025157-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19313
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-025157-m02' is duplicated with machine name 'multinode-025157-m02' in profile 'multinode-025157'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-025157-m03 --driver=kvm2  --container-runtime=crio
E0722 11:28:29.088924   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-025157-m03 --driver=kvm2  --container-runtime=crio: (42.681184755s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-025157
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-025157: exit status 80 (205.231273ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-025157 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-025157-m03 already exists in multinode-025157-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-025157-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.94s)

                                                
                                    
TestScheduledStopUnix (112.37s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-281697 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-281697 --memory=2048 --driver=kvm2  --container-runtime=crio: (40.841209414s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-281697 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-281697 -n scheduled-stop-281697
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-281697 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-281697 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-281697 -n scheduled-stop-281697
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-281697
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-281697 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-281697
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-281697: exit status 7 (64.324109ms)

                                                
                                                
-- stdout --
	scheduled-stop-281697
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-281697 -n scheduled-stop-281697
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-281697 -n scheduled-stop-281697: exit status 7 (63.581249ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-281697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-281697
--- PASS: TestScheduledStopUnix (112.37s)

                                                
                                    
TestRunningBinaryUpgrade (215.24s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4168976363 start -p running-upgrade-555273 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4168976363 start -p running-upgrade-555273 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m4.715446428s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-555273 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-555273 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m28.705020739s)
helpers_test.go:175: Cleaning up "running-upgrade-555273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-555273
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-555273: (1.21719016s)
--- PASS: TestRunningBinaryUpgrade (215.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-543094 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-543094 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (79.063227ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-543094] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19313
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (95.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-543094 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-543094 --driver=kvm2  --container-runtime=crio: (1m35.435598169s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-543094 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (95.67s)

                                                
                                    
TestPause/serial/Start (125.62s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-812059 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-812059 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m5.618417742s)
--- PASS: TestPause/serial/Start (125.62s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (63.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-543094 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-543094 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m2.067288697s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-543094 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-543094 status -o json: exit status 2 (245.015526ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-543094","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-543094
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (63.18s)

                                                
                                    
TestNoKubernetes/serial/Start (25.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-543094 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0722 11:38:12.134485   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 11:38:29.088532   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-543094 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.841370317s)
--- PASS: TestNoKubernetes/serial/Start (25.84s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-543094 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-543094 "sudo systemctl is-active --quiet service kubelet": exit status 1 (183.548969ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.24s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-543094
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-543094: (1.265269831s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (20.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-543094 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-543094 --driver=kvm2  --container-runtime=crio: (20.858761186s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (20.86s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (48.65s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-812059 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-812059 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.622946252s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (48.65s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-543094 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-543094 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.409864ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestNetworkPlugins/group/false (3.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-511820 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-511820 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (102.953103ms)

                                                
                                                
-- stdout --
	* [false-511820] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19313
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 11:38:59.576155   52141 out.go:291] Setting OutFile to fd 1 ...
	I0722 11:38:59.576420   52141 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:38:59.576431   52141 out.go:304] Setting ErrFile to fd 2...
	I0722 11:38:59.576440   52141 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 11:38:59.576611   52141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19313-5960/.minikube/bin
	I0722 11:38:59.577143   52141 out.go:298] Setting JSON to false
	I0722 11:38:59.578034   52141 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4892,"bootTime":1721643448,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 11:38:59.578091   52141 start.go:139] virtualization: kvm guest
	I0722 11:38:59.580100   52141 out.go:177] * [false-511820] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 11:38:59.581615   52141 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 11:38:59.581647   52141 notify.go:220] Checking for updates...
	I0722 11:38:59.584306   52141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 11:38:59.585692   52141 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19313-5960/kubeconfig
	I0722 11:38:59.587112   52141 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19313-5960/.minikube
	I0722 11:38:59.588411   52141 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 11:38:59.589791   52141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 11:38:59.591743   52141 config.go:182] Loaded profile config "kubernetes-upgrade-651148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 11:38:59.591970   52141 config.go:182] Loaded profile config "pause-812059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 11:38:59.592121   52141 config.go:182] Loaded profile config "running-upgrade-555273": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0722 11:38:59.592255   52141 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 11:38:59.629257   52141 out.go:177] * Using the kvm2 driver based on user configuration
	I0722 11:38:59.630605   52141 start.go:297] selected driver: kvm2
	I0722 11:38:59.630628   52141 start.go:901] validating driver "kvm2" against <nil>
	I0722 11:38:59.630642   52141 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 11:38:59.632765   52141 out.go:177] 
	W0722 11:38:59.634067   52141 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0722 11:38:59.635231   52141 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-511820 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-511820

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-511820

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-511820

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-511820

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-511820

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-511820

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-511820

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-511820

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-511820

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-511820

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-511820

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-511820" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-511820" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 22 Jul 2024 11:38:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.50.145:8443
  name: pause-812059
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 22 Jul 2024 11:38:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.178:8443
  name: running-upgrade-555273
contexts:
- context:
    cluster: pause-812059
    extensions:
    - extension:
        last-update: Mon, 22 Jul 2024 11:38:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-812059
  name: pause-812059
- context:
    cluster: running-upgrade-555273
    extensions:
    - extension:
        last-update: Mon, 22 Jul 2024 11:38:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: running-upgrade-555273
  name: running-upgrade-555273
current-context: running-upgrade-555273
kind: Config
preferences: {}
users:
- name: pause-812059
  user:
    client-certificate: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/pause-812059/client.crt
    client-key: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/pause-812059/client.key
- name: running-upgrade-555273
  user:
    client-certificate: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/running-upgrade-555273/client.crt
    client-key: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/running-upgrade-555273/client.key
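The two contexts in the dump above (pause-812059 and running-upgrade-555273, with the latter current) can be inspected or switched with plain kubectl; a minimal sketch, not part of the collected logs:

# Show all contexts recorded in this kubeconfig and mark the current one
kubectl config get-contexts
# Print just the active context (running-upgrade-555273 in this dump)
kubectl config current-context
# Point subsequent kubectl commands at the paused profile instead
kubectl config use-context pause-812059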

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-511820

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511820"

                                                
                                                
----------------------- debugLogs end: false-511820 [took: 2.873233114s] --------------------------------
helpers_test.go:175: Cleaning up "false-511820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-511820
--- PASS: TestNetworkPlugins/group/false (3.12s)
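Every collector in the debugLogs dump above reports the same condition: no false-511820 profile exists on this host, so there was nothing to gather. The profiles that do exist can be confirmed with the command the message suggests; a minimal sketch (the --output flag matches its use in the VerifyDeletedResources step later in this report):

# List the profiles known to this minikube home, as JSON
out/minikube-linux-amd64 profile list --output json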

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (122.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3302412968 start -p stopped-upgrade-006328 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3302412968 start -p stopped-upgrade-006328 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (51.509534394s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3302412968 -p stopped-upgrade-006328 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3302412968 -p stopped-upgrade-006328 stop: (2.141320661s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-006328 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-006328 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.798448489s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (122.45s)
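The upgrade path exercised here reduces to three commands; a minimal sketch of the same flow, where the path to the legacy binary is a hypothetical placeholder (the test uses a temporary copy of minikube v1.26.0):

# 1. Create the cluster with the old release
/path/to/minikube-v1.26.0 start -p stopped-upgrade-006328 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
# 2. Stop it with the same old binary
/path/to/minikube-v1.26.0 -p stopped-upgrade-006328 stop
# 3. Restart the stopped cluster with the binary under test, which brings it up on the new version
out/minikube-linux-amd64 start -p stopped-upgrade-006328 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio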

                                                
                                    
x
+
TestPause/serial/Pause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-812059 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-812059 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-812059 --output=json --layout=cluster: exit status 2 (247.573578ms)

                                                
                                                
-- stdout --
	{"Name":"pause-812059","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-812059","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
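The 418/Paused status codes in the JSON above are easiest to read when projected down to the per-component state names; a minimal sketch using jq (jq is an assumption here, not something the test itself invokes):

out/minikube-linux-amd64 status -p pause-812059 --output=json --layout=cluster \
  | jq '{cluster: .StatusName, apiserver: .Nodes[0].Components.apiserver.StatusName, kubelet: .Nodes[0].Components.kubelet.StatusName}'
# For the paused profile this run reported: cluster "Paused", apiserver "Paused", kubelet "Stopped"
# (status itself exits 2 in this state, which the test treats as acceptable)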

                                                
                                    
x
+
TestPause/serial/Unpause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-812059 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.03s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-812059 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-812059 --alsologtostderr -v=5: (1.033379004s)
--- PASS: TestPause/serial/PauseAgain (1.03s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.03s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-812059 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-812059 --alsologtostderr -v=5: (1.032630827s)
--- PASS: TestPause/serial/DeletePaused (1.03s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-006328
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.90s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (112.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-339929 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0722 11:41:36.611338   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-339929 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m52.310284277s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (112.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-339929 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [678b18f4-8ef0-447a-ab88-a413ebcfaac7] Pending
helpers_test.go:344: "busybox" [678b18f4-8ef0-447a-ab88-a413ebcfaac7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [678b18f4-8ef0-447a-ab88-a413ebcfaac7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003664627s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-339929 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-339929 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-339929 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (59.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-802149 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-802149 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (59.387852181s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (59.39s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-802149 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c2ba5d4f-028c-4438-96b9-f4a26a174c0d] Pending
helpers_test.go:344: "busybox" [c2ba5d4f-028c-4438-96b9-f4a26a174c0d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c2ba5d4f-028c-4438-96b9-f4a26a174c0d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004389438s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-802149 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-802149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-802149 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-605740 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-605740 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m37.331751463s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (685.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-339929 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-339929 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (11m25.061251264s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-339929 -n no-preload-339929
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (685.32s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-605740 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [667661cc-98af-44ea-bc9d-5685d1c143cb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0722 11:46:36.611327   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
helpers_test.go:344: "busybox" [667661cc-98af-44ea-bc9d-5685d1c143cb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004304649s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-605740 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-605740 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-605740 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (539.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-802149 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-802149 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (8m59.16329341s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-802149 -n embed-certs-802149
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (539.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-101261 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-101261 --alsologtostderr -v=3: (4.314969411s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-101261 -n old-k8s-version-101261
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-101261 -n old-k8s-version-101261: exit status 7 (63.777537ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-101261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
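The sequence above is the offline-addon pattern used throughout this group: a stopped profile makes status return exit code 7, and an addon enabled at that point is presumably only recorded in the profile's config until the next start. A minimal sketch of the same check by hand (profile name from this run; the conditional wrapper is an assumption):

out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-101261 -n old-k8s-version-101261
if [ $? -eq 7 ]; then
  # Host reports Stopped (exit 7); enabling the addon here does not require a running cluster
  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-101261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
fi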

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (465.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-605740 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0722 11:51:36.611571   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
E0722 11:53:29.087336   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 11:54:52.135715   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-605740 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (7m45.371674784s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-605740 -n default-k8s-diff-port-605740
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (465.63s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-355657 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-355657 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (47.346381747s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (107.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-511820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0722 12:11:32.136438   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
E0722 12:11:36.610905   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/addons-362127/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-511820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m47.590255815s)
--- PASS: TestNetworkPlugins/group/auto/Start (107.59s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-355657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-355657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.107660149s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-355657 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-355657 --alsologtostderr -v=3: (11.343953986s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-355657 -n newest-cni-355657
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-355657 -n newest-cni-355657: exit status 7 (62.825312ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-355657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (35.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-355657 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-355657 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (35.17358139s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-355657 -n newest-cni-355657
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.44s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-355657 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-355657 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-355657 -n newest-cni-355657
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-355657 -n newest-cni-355657: exit status 2 (234.809149ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-355657 -n newest-cni-355657
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-355657 -n newest-cni-355657: exit status 2 (238.927123ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-355657 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-355657 -n newest-cni-355657
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-355657 -n newest-cni-355657
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (73.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-511820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-511820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m13.442372772s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-511820 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-511820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-r9cqm" [ee011990-98e7-45f1-aa99-6eaf91036b24] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0722 12:13:18.673593   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.crt: no such file or directory
E0722 12:13:18.679633   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.crt: no such file or directory
E0722 12:13:18.689898   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.crt: no such file or directory
E0722 12:13:18.710833   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.crt: no such file or directory
E0722 12:13:18.751216   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.crt: no such file or directory
E0722 12:13:18.832250   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.crt: no such file or directory
E0722 12:13:18.992364   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.crt: no such file or directory
E0722 12:13:19.313556   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.crt: no such file or directory
E0722 12:13:19.954449   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.crt: no such file or directory
E0722 12:13:21.234775   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-r9cqm" [ee011990-98e7-45f1-aa99-6eaf91036b24] Running
E0722 12:13:23.795471   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00449502s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-511820 exec deployment/netcat -- nslookup kubernetes.default
E0722 12:13:28.916678   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-511820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0722 12:13:29.087548   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/functional-941610/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-511820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (81.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-511820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-511820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m21.684169036s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (105.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-511820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0722 12:13:59.638082   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/no-preload-339929/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-511820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m45.368522228s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (105.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-cx6pn" [fa8db837-7a83-42d4-ad9f-1887a9471816] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005495368s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-511820 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (13.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-511820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-p4cpb" [3111b056-3ea3-4ca7-abb2-19700660393f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-p4cpb" [3111b056-3ea3-4ca7-abb2-19700660393f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.003892983s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-511820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-511820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-511820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (59.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-511820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0722 12:14:46.149911   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-511820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (59.609295526s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (83.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-511820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-511820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m23.091195548s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vlvzc" [1136c0d8-929b-49ff-a78f-7245f70244fa] Running
E0722 12:15:06.630449   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/old-k8s-version-101261/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00871343s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-511820 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-511820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-ztxld" [c65fe89a-0c85-4e09-89df-3540017e9588] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-ztxld" [c65fe89a-0c85-4e09-89df-3540017e9588] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004266488s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-511820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-511820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-511820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-511820 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-511820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-6ks9h" [bf52db45-4a79-469d-8117-96ec1df1d9be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-6ks9h" [bf52db45-4a79-469d-8117-96ec1df1d9be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005431718s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-511820 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-511820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-49s2b" [c5b1f4bf-d16c-43f9-b46c-289a67f13821] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-49s2b" [c5b1f4bf-d16c-43f9-b46c-289a67f13821] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.005810657s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (64.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-511820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-511820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m4.07382887s)
--- PASS: TestNetworkPlugins/group/bridge/Start (64.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-511820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-511820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-511820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (26.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-511820 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-511820 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.240963752s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-511820 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-511820 exec deployment/netcat -- nslookup kubernetes.default: (10.158925926s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (26.20s)
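This run shows the retry behaviour of the DNS check: the first nslookup times out while in-cluster DNS is still settling, and a later attempt succeeds, so the test still passes after 26 seconds. Below is a sketch of that retry-until-deadline pattern; it assumes kubectl is on PATH and that the enable-default-cni-511820 context exists, and the real harness uses its own polling helper rather than this code.

// dns_retry.go - retry an in-cluster nslookup until it succeeds or a deadline passes.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for {
		cmd := exec.Command("kubectl", "--context", "enable-default-cni-511820",
			"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default")
		out, err := cmd.CombinedOutput()
		if err == nil {
			log.Printf("DNS resolution succeeded:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("DNS never resolved before the deadline: %v\n%s", err, out)
		}
		log.Printf("nslookup failed (%v), retrying in 10s", err)
		time.Sleep(10 * time.Second)
	}
}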

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-511820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-511820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xnlqg" [2b023cb5-033b-4757-86a1-dfcdca0297cc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004621549s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-511820 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-511820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-8ssxr" [7387463b-49b5-4866-b2de-16ca5688711b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0722 12:16:34.305891   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/client.crt: no such file or directory
E0722 12:16:34.311167   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/client.crt: no such file or directory
E0722 12:16:34.321467   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/client.crt: no such file or directory
E0722 12:16:34.341757   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/client.crt: no such file or directory
E0722 12:16:34.382078   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/client.crt: no such file or directory
E0722 12:16:34.462888   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/client.crt: no such file or directory
E0722 12:16:34.623620   13098 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/default-k8s-diff-port-605740/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-8ssxr" [7387463b-49b5-4866-b2de-16ca5688711b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004717212s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-511820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-511820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-511820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-511820 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-511820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-lgnxb" [97cea66f-3b30-4d42-b6bd-3cfa888a07ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-lgnxb" [97cea66f-3b30-4d42-b6bd-3cfa888a07ba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004269664s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-511820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-511820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-511820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (40/326)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
50 TestAddons/parallel/Volcano 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
267 TestStartStop/group/disable-driver-mounts 0.13
281 TestNetworkPlugins/group/kubenet 2.78
289 TestNetworkPlugins/group/cilium 3.05
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-737017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-737017
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-511820 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-511820

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-511820

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-511820

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-511820

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-511820

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-511820

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-511820

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-511820

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-511820

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-511820

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-511820

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-511820" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-511820" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 22 Jul 2024 11:38:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.50.145:8443
  name: pause-812059
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 22 Jul 2024 11:38:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.178:8443
  name: running-upgrade-555273
contexts:
- context:
    cluster: pause-812059
    extensions:
    - extension:
        last-update: Mon, 22 Jul 2024 11:38:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-812059
  name: pause-812059
- context:
    cluster: running-upgrade-555273
    user: running-upgrade-555273
  name: running-upgrade-555273
current-context: running-upgrade-555273
kind: Config
preferences: {}
users:
- name: pause-812059
  user:
    client-certificate: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/pause-812059/client.crt
    client-key: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/pause-812059/client.key
- name: running-upgrade-555273
  user:
    client-certificate: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/running-upgrade-555273/client.crt
    client-key: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/running-upgrade-555273/client.key
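The kubectl config dump above also explains the errors earlier in this debugLogs block: the kubeconfig only contains the pause-812059 and running-upgrade-555273 entries, so every command that asks for the kubenet-511820 context fails with "context was not found". A sketch of listing those contexts programmatically with client-go is shown below; the kubeconfig path is a hypothetical example, not one taken from this run.

// list_contexts.go - print the contexts defined in a kubeconfig file.
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path; substitute the kubeconfig you want to inspect.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("context:", name)
	}
}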

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-511820

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511820"

                                                
                                                
----------------------- debugLogs end: kubenet-511820 [took: 2.633386066s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-511820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-511820
--- SKIP: TestNetworkPlugins/group/kubenet (2.78s)
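The "Profile "kubenet-511820" not found" responses above are expected: debugLogs queried a profile that was never created because the test is skipped on this crio job. A minimal sketch of the two follow-up commands the messages themselves suggest (illustrative only, not part of the recorded run):

out/minikube-linux-amd64 profile list              # show which minikube profiles actually exist on the host
out/minikube-linux-amd64 start -p kubenet-511820   # would create the missing profile if the kubenet scenario were wanted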

                                                
                                    
TestNetworkPlugins/group/cilium (3.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-511820 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-511820

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-511820

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-511820

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-511820

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-511820

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-511820

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-511820

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-511820

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-511820

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-511820

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-511820

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-511820" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-511820

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-511820

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-511820

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-511820

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-511820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-511820" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19313-5960/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 22 Jul 2024 11:38:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.50.145:8443
  name: pause-812059
contexts:
- context:
    cluster: pause-812059
    extensions:
    - extension:
        last-update: Mon, 22 Jul 2024 11:38:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-812059
  name: pause-812059
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-812059
  user:
    client-certificate: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/pause-812059/client.crt
    client-key: /home/jenkins/minikube-integration/19313-5960/.minikube/profiles/pause-812059/client.key
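The kubeconfig dumped above is consistent with the errors in this debugLogs run: it defines only the pause-812059 context and leaves current-context empty, so the requested cilium-511820 context cannot be resolved. A minimal sketch, using that same kubeconfig, of how the available contexts could be inspected and a valid one selected (illustrative only):

kubectl config get-contexts               # lists pause-812059; cilium-511820 is absent
kubectl config use-context pause-812059   # sets current-context so context-less kubectl calls resolve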

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-511820

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-511820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511820"

                                                
                                                
----------------------- debugLogs end: cilium-511820 [took: 2.907005398s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-511820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-511820
--- SKIP: TestNetworkPlugins/group/cilium (3.05s)

                                                
                                    